From: Andrea Righi <righi.andrea@gmail.com>
To: Paul Menage <menage@google.com>
Cc: randy.dunlap@oracle.com, Carl Henrik Lunde <chlunde@ping.uio.no>,
	eric.rannaud@gmail.com, Balbir Singh <balbir@linux.vnet.ibm.com>,
	fernando@oss.ntt.co.jp, Andrea Righi <righi.andrea@gmail.com>,
	dradford@bluehost.com, agk@sourceware.org,
	subrata@linux.vnet.ibm.com, axboe@kernel.dk,
	akpm@linux-foundation.org,
	containers@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, dave@linux.vnet.ibm.com,
	matt@bluehost.com, roberto@unbit.it, ngupta@google.com
Subject: [PATCH 1/9] io-throttle documentation
Date: Tue, 14 Apr 2009 22:21:12 +0200
Message-ID: <1239740480-28125-2-git-send-email-righi.andrea@gmail.com>
In-Reply-To: <1239740480-28125-1-git-send-email-righi.andrea@gmail.com>

Documentation of the block device I/O controller: description, usage,
advantages and design.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
---
 Documentation/cgroups/io-throttle.txt |  451 +++++++++++++++++++++++++++++++++
 1 files changed, 451 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/cgroups/io-throttle.txt

diff --git a/Documentation/cgroups/io-throttle.txt b/Documentation/cgroups/io-throttle.txt
new file mode 100644
index 0000000..7650601
--- /dev/null
+++ b/Documentation/cgroups/io-throttle.txt
@@ -0,0 +1,451 @@
+
+               Block device I/O bandwidth controller
+
+----------------------------------------------------------------------
+1. DESCRIPTION
+
+This controller makes it possible to limit the I/O bandwidth of specific block
+devices for specific process containers (cgroups [1]) by imposing additional
+delays on the I/O requests of processes that exceed the limits defined in the
+control group filesystem.
+
+Bandwidth limiting rules offer better control over QoS than priority or
+weight-based solutions, which only express the applications' relative
+performance requirements. Moreover, priority-based solutions are affected by
+performance bursts when only low-priority requests are submitted to a general
+purpose resource dispatcher.
+
+The goal of the I/O bandwidth controller is to improve performance
+predictability from the applications' point of view and provide performance
+isolation of different control groups sharing the same block devices.
+
+NOTE #1: If you are looking for a way to improve the overall throughput of the
+system, you should probably use a different solution.
+
+NOTE #2: The current implementation does not guarantee minimum bandwidth
+levels; QoS is enforced only by slowing down the I/O "traffic" that exceeds
+the limits specified by the user. Minimum I/O rate thresholds can be expected
+only if the user configures a proper I/O bandwidth partitioning of the block
+devices shared among the different cgroups (in theory, if the sum of all the
+individual limits defined for a block device does not exceed the total I/O
+bandwidth of that device).
+
+----------------------------------------------------------------------
+2. USER INTERFACE
+
+2.1. Add/remove I/O limiting rules
+
+A new I/O limiting rule is configured using the following files:
+- blockio.bandwidth-max
+- blockio.iops-max
+
+The I/O bandwidth file (blockio.bandwidth-max) can be used to limit the
+throughput of a certain cgroup, while blockio.iops-max can be used to throttle
+cgroups containing applications that perform sparse/seeky I/O workloads. Any
+combination of the two can be used to define more complex I/O limiting rules,
+expressed in terms of both IOPS and bandwidth.
+
+The same files can be used to set multiple rules for different block devices
+in the same cgroup.
+
+The following syntax can be used to configure any limiting rule:
+
+# /bin/echo DEV:LIMIT:STRATEGY:BUCKET_SIZE > CGROUP/FILE
+
+- DEV is the name of the device the limiting rule is applied to.
+
+- LIMIT is the maximum I/O activity allowed on DEV by CGROUP; LIMIT can
+  represent a bandwidth limitation (expressed in bytes/s) when writing to
+  blockio.bandwidth-max, or a limitation on the maximum number of I/O
+  operations per second (expressed in operations/s) issued by CGROUP.
+
+  A generic I/O limiting rule for a block device DEV can be removed by setting
+  LIMIT to 0.
+
+- STRATEGY is the throttling strategy used to throttle the applications' I/O
+  requests from/to device DEV. At the moment two different strategies can be
+  used [2][3]:
+
+  0 = leaky bucket: the controller accepts at most B bytes (B = LIMIT * time)
+		    or O operations (O = LIMIT * time); further I/O requests
+		    are delayed by scheduling a timeout for the tasks that
+		    issued them.
+
+            Different I/O flow
+               | | |
+               | v |
+               |   v
+               v
+              .......
+              \     /
+               \   /  leaky-bucket
+                ---
+                |||
+                vvv
+             Smoothed I/O flow
+
+  1 = token bucket: LIMIT tokens are added to the bucket every second; the
+		    bucket can hold at most BUCKET_SIZE tokens; an I/O
+		    request is accepted if there are enough tokens in the
+		    bucket; when a request of N bytes arrives, N tokens are
+		    removed from the bucket; if fewer than N tokens are
+		    available, the request is delayed until a sufficient
+		    number of tokens is available in the bucket.
+
+            Tokens (I/O rate)
+                o
+                o
+                o
+              ....... <--.
+              \     /    | Bucket size (burst limit)
+               \ooo/     |
+                ---   <--'
+                 |ooo
+    Incoming --->|---> Conforming
+    I/O          |oo   I/O
+    requests  -->|-->  requests
+                 |
+            ---->|
+
+  Leaky bucket respects the limits more precisely than token bucket, because
+  bursty workloads are always smoothed. Token bucket, on the other hand,
+  allows a small degree of irregularity in the I/O flows (the burst limit),
+  and for this reason it is better in terms of efficiency (bursty workloads
+  are not smoothed as long as there are enough tokens in the bucket).
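+
+  A minimal user-space model of the accept/delay decision for the two
+  strategies above (the struct and function names are invented for this
+  sketch and do not reflect the in-kernel implementation; 'stamp' is assumed
+  to be initialized with clock_gettime() when the rule is created):
+
+	#include <stdint.h>
+	#include <time.h>
+
+	struct iot_bucket {
+		uint64_t limit;		/* LIMIT: bytes/s or operations/s */
+		uint64_t bucket_size;	/* BUCKET_SIZE, token bucket only */
+		double tokens;		/* current fill, token bucket only */
+		uint64_t acc;		/* accumulated I/O, leaky bucket only */
+		struct timespec stamp;	/* last refill / window start */
+	};
+
+	static double seconds_since(const struct timespec *t0)
+	{
+		struct timespec now;
+
+		clock_gettime(CLOCK_MONOTONIC, &now);
+		return (now.tv_sec - t0->tv_sec) +
+		       (now.tv_nsec - t0->tv_nsec) / 1e9;
+	}
+
+	/* Leaky bucket: never let 'acc' exceed LIMIT * elapsed time. */
+	static double leaky_bucket_delay(struct iot_bucket *b, uint64_t n)
+	{
+		double allowed, t = seconds_since(&b->stamp);
+
+		b->acc += n;
+		allowed = (double)b->limit * t;
+		if (b->acc <= allowed)
+			return 0;			/* conforming */
+		return (b->acc - allowed) / b->limit;	/* smooth the burst */
+	}
+
+	/* Token bucket: spend tokens refilled at LIMIT tokens per second. */
+	static double token_bucket_delay(struct iot_bucket *b, uint64_t n)
+	{
+		b->tokens += b->limit * seconds_since(&b->stamp);
+		clock_gettime(CLOCK_MONOTONIC, &b->stamp);
+		if (b->tokens > b->bucket_size)
+			b->tokens = b->bucket_size;
+		b->tokens -= n;
+		if (b->tokens >= 0)
+			return 0;			/* conforming */
+		return -b->tokens / b->limit;		/* wait for refill */
+	}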
+
+- BUCKET_SIZE is used only with token bucket (STRATEGY == 1) and defines the
+  size of the bucket in bytes (blockio.bandwidth-max) or in I/O operations
+  (blockio.iops-max).
+
+- CGROUP is the name of the limited process container.
+
+The following syntaxes are also allowed:
+
+- remove an I/O bandwidth limiting rule:
+# /bin/echo DEV:0 > CGROUP/blockio.bandwidth-max
+
+- configure a limiting rule using leaky bucket throttling (ignore bucket size):
+# /bin/echo DEV:LIMIT:0 > CGROUP/blockio.bandwidth-max
+
+- configure a limiting rule using token bucket throttling
+  (with bucket size == LIMIT):
+# /bin/echo DEV:LIMIT:1 > CGROUP/blockio.bandwidth-max
+
+2.2. Show I/O limiting rules
+
+All the defined rules and statistics for a specific cgroup can be shown by
+reading the files blockio.bandwidth-max for bandwidth constraints and
+blockio.iops-max for I/O operations per second constraints.
+
+The following syntax is used:
+
+$ cat CGROUP/blockio.bandwidth-max
+MAJOR MINOR LIMIT STRATEGY LEAKY_STAT BUCKET_SIZE BUCKET_FILL TIME_DELTA
+
+- MAJOR is the major device number of DEV (defined above)
+
+- MINOR is the minor device number of DEV (defined above)
+
+- LIMIT, STRATEGY and BUCKET_SIZE are the same parameters defined above
+
+- LEAKY_STAT is the number of bytes (blockio.bandwidth-max) or I/O operations
+  (blockio.iops-max) currently allowed by the I/O controller (only used with
+  the leaky bucket strategy - STRATEGY == 0)
+
+- BUCKET_FILL represents the number of tokens currently present in the bucket
+  (only used with the token bucket strategy - STRATEGY == 1)
+
+- TIME_DELTA can be one of the following:
+  - the number of jiffies elapsed since the last I/O request (token bucket)
+  - the number of jiffies over which the bytes or the number of I/O
+    operations given by LEAKY_STAT have been accumulated (leaky bucket)
+
+Multiple per-block device rules are reported in multiple rows
+(DEVi, i = 1 ..  n):
+
+$ cat CGROUP/blockio.bandwidth-max
+MAJOR1 MINOR1 BW1 STRATEGY1 LEAKY_STAT1 BUCKET_SIZE1 BUCKET_FILL1 TIME_DELTA1
+MAJOR2 MINOR2 BW2 STRATEGY2 LEAKY_STAT2 BUCKET_SIZE2 BUCKET_FILL2 TIME_DELTA2
+...
+MAJORn MINORn BWn STRATEGYn LEAKY_STATn BUCKET_SIZEn BUCKET_FILLn TIME_DELTAn
+
+The same fields are used to describe I/O operations/sec rules. The only
+difference is that the cost of each I/O operation is scaled up by a factor of
+1000. This makes it possible to apply finer-grained sleeps and provide more
+precise throttling.
+
+$ cat CGROUP/blockio.iops-max
+MAJOR MINOR LIMITx1000 STRATEGY LEAKY_STATx1000 BUCKET_SIZEx1000 BUCKET_FILLx1000 TIME_DELTA
+...
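+
+A small user-space helper could undo this scaling when displaying the
+configured limits. The following sketch assumes the example cgroup mount point
+used later in this document; only the first three fields of each row are read:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned int major, minor;
+		unsigned long long limit;
+		FILE *f = fopen("/mnt/cgroup/foo/blockio.iops-max", "r");
+
+		if (!f)
+			return 1;	/* no iops rules or no io-throttle */
+		/* LIMIT is reported multiplied by 1000 (see above). */
+		while (fscanf(f, "%u %u %llu %*[^\n]",
+			      &major, &minor, &limit) == 3)
+			printf("%u:%u -> %llu iops\n",
+			       major, minor, limit / 1000);
+		fclose(f);
+		return 0;
+	}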
+
+2.3. Additional I/O statistics
+
+Additional cgroup I/O throttling statistics are reported in
+blockio.throttlecnt:
+
+$ cat CGROUP/blockio.throttlecnt
+MAJOR MINOR BW_COUNTER BW_SLEEP IOPS_COUNTER IOPS_SLEEP
+
+ - MAJOR and MINOR are the major and minor numbers, respectively, of the
+   device the following statistics refer to
+ - BW_COUNTER gives the number of times that the cgroup bandwidth limit of
+   this particular device was exceeded
+ - BW_SLEEP is the amount of sleep time, measured in clock ticks (divide
+   by sysconf(_SC_CLK_TCK)), imposed on the processes of this cgroup that
+   exceeded the bandwidth limit for this particular device
+ - IOPS_COUNTER gives the number of times that the cgroup I/O operations per
+   second limit of this particular device was exceeded
+ - IOPS_SLEEP is the amount of sleep time, measured in clock ticks (divide
+   by sysconf(_SC_CLK_TCK)), imposed on the processes of this cgroup that
+   exceeded the I/O operations per second limit for this particular device
+
+Example:
+$ cat CGROUP/blockio.throttlecnt
+8 0 0 0 0 0
+^ ^ ^ ^ ^ ^
+ \ \ \ \ \ \___iops sleep (in clock ticks)
+  \ \ \ \ \____iops throttle counter
+   \ \ \ \_____bandwidth sleep (in clock ticks)
+    \ \ \______bandwidth throttle counter
+     \ \_______minor dev. number
+      \________major dev. number
+
+2.4. Per-task statistics
+
+Distinct statistics for each process are reported in
+/proc/PID/io-throttle-stat:
+
+$ cat /proc/PID/io-throttle-stat
+BW_COUNTER BW_SLEEP IOPS_COUNTER IOPS_SLEEP
+
+Example:
+$ cat /proc/$$/io-throttle-stat
+0 0 0 0
+^ ^ ^ ^
+ \ \ \ \_____global iops sleep (in clock ticks)
+  \ \ \______global iops counter
+   \ \_______global bandwidth sleep (clock ticks)
+    \________global bandwidth counter
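+
+The sleep values above (like the *_SLEEP fields of blockio.throttlecnt) are
+expressed in clock ticks. A minimal sketch that reads the per-task statistics
+of the current process and converts the sleep times to seconds (error handling
+kept to a minimum):
+
+	#include <stdio.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		unsigned long long bw_cnt, bw_sleep, iops_cnt, iops_sleep;
+		long tck = sysconf(_SC_CLK_TCK);
+		FILE *f = fopen("/proc/self/io-throttle-stat", "r");
+
+		if (!f)
+			return 1;	/* io-throttle not available */
+		if (fscanf(f, "%llu %llu %llu %llu",
+			   &bw_cnt, &bw_sleep, &iops_cnt, &iops_sleep) == 4)
+			printf("bw: %llu throttles, %.2f s slept; "
+			       "iops: %llu throttles, %.2f s slept\n",
+			       bw_cnt, (double)bw_sleep / tck,
+			       iops_cnt, (double)iops_sleep / tck);
+		fclose(f);
+		return 0;
+	}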
+
+2.5. Generic usage examples
+
+* Mount the cgroup filesystem (blockio subsystem):
+  # mkdir /mnt/cgroup
+  # mount -t cgroup -oblockio blockio /mnt/cgroup
+
+* Instantiate the new cgroup "foo":
+  # mkdir /mnt/cgroup/foo
+  --> the cgroup foo has been created
+
+* Add the current shell process to the cgroup "foo":
+  # /bin/echo $$ > /mnt/cgroup/foo/tasks
+  --> the current shell has been added to the cgroup "foo"
+
+* Give a maximum of 1MiB/s of I/O bandwidth on /dev/sda to the cgroup "foo",
+  using the leaky bucket throttling strategy:
+  # /bin/echo /dev/sda:$((1024 * 1024)):0:0 > \
+  > /mnt/cgroup/foo/blockio.bandwidth-max
+  # sh
+  --> the subshell 'sh' is running in cgroup "foo" and it can use a maximum I/O
+      bandwidth of 1MiB/s on /dev/sda
+
+* Give a maximum of 8MiB/s of I/O bandwidth on /dev/sdb to the cgroup "foo",
+  using the token bucket throttling strategy, with bucket size = 8MiB:
+  # /bin/echo /dev/sdb:$((8 * 1024 * 1024)):1:$((8 * 1024 * 1024)) > \
+  > /mnt/cgroup/foo/blockio.bandwidth-max
+  # sh
+  --> the subshell 'sh' is running in cgroup "foo" and it can use a maximum I/O
+      bandwidth of 1MiB/s on /dev/sda (controlled by leaky bucket throttling)
+      and 8MiB/s on /dev/sdb (controlled by token bucket throttling)
+
+* Run a benchmark doing I/O on /dev/sda and /dev/sdb; the I/O limits and usage
+  defined for cgroup "foo" can be shown as follows:
+  # cat /mnt/cgroup/foo/blockio.bandwidth-max
+  8 16 8388608 1 0 8388608 -522560 48
+  8 0 1048576 0 737280 0 0 216
+
+* Extend the maximum I/O bandwidth for the cgroup "foo" to 16MiB/s on /dev/sda:
+  # /bin/echo /dev/sda:$((16 * 1024 * 1024)):0:0 > \
+  > /mnt/cgroup/foo/blockio.bandwidth-max
+  # cat /mnt/cgroup/foo/blockio.bandwidth-max
+  8 16 8388608 1 0 8388608 -84432 206436
+  8 0 16777216 0 0 0 0 15212
+
+* Remove limiting rule on /dev/sdb for cgroup "foo":
+  # /bin/echo /dev/sdb:0:0:0 > /mnt/cgroup/foo/blockio.bandwidth-max
+  # cat /mnt/cgroup/foo/blockio.bandwidth-max
+  8 0 16777216 0 0 0 0 110388
+
+* Set a maximum of 100 I/O operations/sec (leaky bucket strategy) on /dev/sdc
+  for cgroup "foo":
+  # /bin/echo /dev/sdc:100:0 > /mnt/cgroup/foo/blockio.iops-max
+  # cat /mnt/cgroup/foo/blockio.iops-max
+  8 32 100000 0 846000 0 2113
+          ^        ^
+         /________/
+        /
+  Remember: these values are scaled up by a factor of 1000 to apply
+  fine-grained throttling (i.e. LIMIT == 100000 means a maximum of 100 I/O
+  operations per second)
+
+* Remove limiting rule for I/O operations from /dev/sdc for cgroup "foo":
+  # /bin/echo /dev/sdc:0 > /mnt/cgroup/foo/blockio.iops-max
+
+----------------------------------------------------------------------
+3. ADVANTAGES OF PROVIDING THIS FEATURE
+
+* Allow I/O traffic shaping for block devices shared among different cgroups
+* Improve I/O performance predictability on block devices shared between
+  different cgroups
+* Limiting rules do not depend on the particular I/O scheduler (anticipatory,
+  deadline, CFQ, noop) or on the type of the underlying block devices
+* The bandwidth limits are enforced for both synchronous and asynchronous
+  operations, including I/O that passes through the page cache or buffers,
+  and not only direct I/O (see below for details)
+* It is possible to implement a simple user-space application to dynamically
+  adjust the I/O workload of different process containers at run-time,
+  according to the particular users' requirements and applications'
+  performance constraints (a minimal sketch follows)
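+
+  A minimal sketch of such a tool, assuming the user interface described in
+  section 2 (the program name and command-line layout are made up for this
+  example; the rule string written to blockio.bandwidth-max follows the
+  documented DEV:LIMIT:STRATEGY:BUCKET_SIZE syntax):
+
+	#include <stdio.h>
+	#include <stdlib.h>
+
+	/*
+	 * Usage: iot-set-bw CGROUP_DIR DEV LIMIT [STRATEGY]
+	 * e.g.:  iot-set-bw /mnt/cgroup/foo /dev/sda $((2 * 1024 * 1024)) 1
+	 */
+	int main(int argc, char *argv[])
+	{
+		char path[256];
+		FILE *f;
+		int strategy = (argc > 4) ? atoi(argv[4]) : 0;
+
+		if (argc < 4) {
+			fprintf(stderr,
+				"usage: %s CGROUP DEV LIMIT [STRATEGY]\n",
+				argv[0]);
+			return 1;
+		}
+		snprintf(path, sizeof(path), "%s/blockio.bandwidth-max",
+			 argv[1]);
+		f = fopen(path, "w");
+		if (!f) {
+			perror(path);
+			return 1;
+		}
+		/* DEV:LIMIT:STRATEGY:BUCKET_SIZE (bucket size == LIMIT) */
+		fprintf(f, "%s:%s:%d:%s\n", argv[2], argv[3], strategy,
+			argv[3]);
+		return fclose(f) ? 1 : 0;
+	}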
+
+----------------------------------------------------------------------
+4. DESIGN
+
+I/O throttling is performed by imposing an explicit timeout on the processes
+that exceed the I/O limits of the cgroup they belong to. I/O accounting
+happens per cgroup.
+
+Only the actual I/O that flows to the block devices is considered. Multiple
+re-reads of pages already present in the page cache, as well as re-writes of
+dirty pages, are neither accounted nor throttled, since they do not generate
+any real I/O operation.
+
+This means that a process that re-reads or re-writes the same blocks of a
+file multiple times is affected by the I/O limitations only for the actual
+I/O performed from/to the underlying block devices.
+
+4.1. Synchronous I/O tracking and throttling
+
+The io-throttle controller just works as expected for synchronous (read and
+write) operations: the real I/O activity is reduced synchronously according to
+the defined limitations.
+
+If the operation is synchronous, we automatically know that the context of
+the request is the current task, so we can charge the cgroup the current task
+belongs to, and throttle the current task as well if it exceeded the cgroup
+limitations.
+
+4.2. Buffered I/O (write-back) tracking
+
+For buffered writes the scenario is a bit more complex, because the writes in
+the page cache are processed asynchronously by kernel threads (pdflush), using
+a write-back policy. So the real writes to the underlying block devices occur
+in a different I/O context from that of the task that originally generated the
+dirty pages.
+
+The I/O bandwidth controller uses the following solution to resolve this
+problem.
+
+If the operation is a buffered write, we can charge the right cgroup by
+looking at the owner of the first page involved in the I/O operation, which
+gives the context that generated the I/O activity at the source. This
+information can be retrieved using the page_cgroup functionality originally
+provided by the cgroup memory controller [4], and now provided specifically by
+the bio-cgroup controller [5].
+
+In this way we can correctly account the I/O cost to the right cgroup, but we
+cannot throttle the current task at this stage, because, in general, it is a
+different task (e.g., pdflush, which is asynchronously processing the dirty
+pages).
+
+For this reason, all the write-back requests that are not directly submitted by
+the real owner and that need to be throttled are not dispatched immediately in
+submit_bio(). Instead, they are added into an rbtree and processed
+asynchronously by a dedicated kernel thread: kiothrottled.
+
+A deadline is associated with each throttled write-back request, depending on
+the bandwidth usage of the cgroup it belongs to. When a request is inserted
+into the rbtree, kiothrottled is awakened. This thread periodically selects
+all the requests with an expired deadline and submits the selected requests to
+the underlying block devices using generic_make_request().
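+
+A simplified user-space model of how such a deadline could be derived from a
+cgroup's bandwidth usage (the struct and function names are invented for this
+sketch; the real code works in jiffies and on per-cgroup statistics):
+
+	#include <stdint.h>
+
+	/* Hypothetical write-back accounting for one cgroup/device pair. */
+	struct iot_wb_account {
+		uint64_t limit;		/* configured bandwidth, bytes/s */
+		uint64_t pending;	/* bytes queued, not yet dispatched */
+	};
+
+	/*
+	 * Deadline, in seconds from now, for a new throttled write-back
+	 * request of 'bytes' bytes: the request is delayed just long enough
+	 * for the already-pending I/O to be dispatched within the configured
+	 * bandwidth. The dispatcher (kiothrottled in this design) submits the
+	 * request once the deadline has expired and decreases 'pending'
+	 * accordingly.
+	 */
+	static double wb_request_deadline(struct iot_wb_account *a,
+					  uint64_t bytes)
+	{
+		double deadline = (double)a->pending / a->limit;
+
+		a->pending += bytes;
+		return deadline;
+	}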
+
+4.3. Usage of bio-cgroup controller
+
+The bio-cgroup controller can be used to track buffered I/O (in write-back
+conditions) and to properly apply throttling. The simplest way is to mount
+io-throttle (blockio) and bio-cgroup (bio) together to track buffered I/O.
+That's it.
+
+An alternative way is to make use of the bio-cgroup id. An association
+between a given io-throttle cgroup and a given bio-cgroup can be built by
+writing a bio-cgroup id to the file blockio.bio_id.
+
+This file is exported for the purpose of associating io-throttle and
+bio-cgroup groups. If you'd like to create an association, you must ensure
+that the io-throttle group is empty, that is, there are no tasks in this
+group; otherwise, creating the association will fail. If an association is
+successfully built, moving tasks into this group will be denied. Of course,
+you can remove an association by echoing a negative number into
+blockio.bio_id.
+
+In this way, we don't necessarily have to mount io-throttle and bio-cgroup
+together. This is friendlier to the other subsystems that also want to use
+bio-cgroup.
+
+Example:
+* Create an association between an io-throttle group and a bio-cgroup group
+  with "bio" and "blockio" subsystems mounted in different mount points:
+  # mount -t cgroup -o bio bio-cgroup /mnt/bio-cgroup/
+  # cd /mnt/bio-cgroup/
+  # mkdir bio-grp
+  # cat bio-grp/bio.id
+  1
+  # mount -t cgroup -o blockio blockio /mnt/io-throttle
+  # cd /mnt/io-throttle
+  # mkdir foo
+  # echo 1 > foo/blockio.bio_id
+
+* Now move the current shell into the new io-throttle/bio-cgroup group:
+  # echo $$ > /mnt/bio-cgroup/bio-grp/tasks
+
+The task will also appear in /mnt/io-throttle/foo/tasks, due to the
+blockio/bio association created above.
+
+4.4. Per-block device I/O limiting rules
+
+Multiple rules for different block devices are stored in a linked list, using
+the dev_t number of each block device as the key to uniquely identify each
+element of the list. RCU synchronization is used to protect the whole list
+structure, since the elements in the list are not supposed to change
+frequently (they change only when a new rule is defined or an old rule is
+removed or updated), while reads of the list occur at each operation that
+generates I/O. This allows zero overhead for cgroups that do not use any
+limitation.
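+
+A kernel-style sketch of such an RCU-protected lookup (the structure and
+function names below are invented for illustration and do not match the
+actual io-throttle data structures):
+
+	#include <linux/rculist.h>
+	#include <linux/rcupdate.h>
+	#include <linux/types.h>
+
+	/* Hypothetical per-device rule, keyed by dev_t. */
+	struct iot_rule {
+		struct list_head node;
+		dev_t dev;
+		u64 limit;
+	};
+
+	/* Reader side: called on each I/O operation, lock-free. */
+	static u64 iot_find_limit(struct list_head *rules, dev_t dev)
+	{
+		struct iot_rule *rule;
+		u64 limit = 0;	/* 0 == no limit defined for this device */
+
+		rcu_read_lock();
+		list_for_each_entry_rcu(rule, rules, node) {
+			if (rule->dev == dev) {
+				limit = rule->limit;
+				break;
+			}
+		}
+		rcu_read_unlock();
+		return limit;
+	}
+
+	/*
+	 * Writers would update the list under a spinlock using
+	 * list_add_rcu()/list_del_rcu(), and wait for a grace period with
+	 * synchronize_rcu() before freeing a removed rule.
+	 */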
+
+WARNING: per-block device limiting rules always refer to the dev_t device
+number. If a block device is unplugged (e.g. a USB device), the limiting rules
+defined for that device persist and are still valid if a new device is plugged
+into the system and happens to use the same major and minor numbers.
+
+4.5. Asynchronous I/O (AIO) handling
+
+Explicit sleeps are *not* imposed on tasks doing asynchronous I/O (AIO)
+operations; AIO throttling is performed by returning -EAGAIN from
+sys_io_submit(). Userspace applications must be able to handle this error
+code appropriately.
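+
+A minimal example of how an application could deal with this, using libaio
+(build with -laio; the file path, buffer size and back-off interval are
+arbitrary choices for the example):
+
+	#define _GNU_SOURCE
+	#include <errno.h>
+	#include <fcntl.h>
+	#include <libaio.h>
+	#include <stdlib.h>
+	#include <string.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		io_context_t ctx = 0;
+		struct iocb cb, *cbs[1] = { &cb };
+		struct io_event ev;
+		void *buf;
+		int fd, ret;
+
+		/* File on a disk-backed fs; O_DIRECT needs an aligned buffer. */
+		fd = open("aio-test", O_WRONLY | O_CREAT | O_DIRECT, 0600);
+		if (fd < 0 || io_setup(1, &ctx) ||
+		    posix_memalign(&buf, 4096, 4096))
+			return 1;
+		memset(buf, 0, 4096);
+		io_prep_pwrite(&cb, fd, buf, 4096, 0);
+
+		/* Retry while the I/O controller throttles us with -EAGAIN. */
+		while ((ret = io_submit(ctx, 1, cbs)) == -EAGAIN)
+			usleep(10000);		/* back off, then try again */
+		if (ret == 1)
+			ret = io_getevents(ctx, 1, 1, &ev, NULL) == 1 ? 0 : -1;
+		io_destroy(ctx);
+		close(fd);
+		return ret ? 1 : 0;
+	}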
+
+----------------------------------------------------------------------
+5. TODO
+
+* Support proportional I/O bandwidth for optimal bandwidth usage. For
+  example, use the kiothrottled rbtree: all the requests queued to the I/O
+  subsystem first go into the rbtree; then, based on a per-cgroup I/O
+  priority and feedback from the I/O schedulers, dispatch the requests to the
+  elevator. This would allow both bandwidth limiting and proportional
+  bandwidth functionality using a generic approach.
+
+* Implement a fair throttling policy: distribute the time to sleep equally
+  among all the tasks of a cgroup that exceeded the I/O limits, e.g. depending
+  on the amount of I/O activity previously generated by each task (see
+  task_io_accounting).
+
+----------------------------------------------------------------------
+6. REFERENCES
+
+[1] Documentation/cgroups/cgroups.txt
+[2] http://en.wikipedia.org/wiki/Leaky_bucket
+[3] http://en.wikipedia.org/wiki/Token_bucket
+[4] Documentation/controllers/memory.txt
+[5] http://people.valinux.co.jp/~ryov/bio-cgroup
-- 
1.5.6.3
