From: NeilBrown <neilb@suse.com>
To: Jens Axboe <axboe@fb.com>
Cc: linux-block@vger.kernel.org, linux-mm@kvack.org,
LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH] loop: Add PF_LESS_THROTTLE to block/loop device thread.
Date: Mon, 03 Apr 2017 11:18:51 +1000
Message-ID: <871staffus.fsf@notabene.neil.brown.name>
When a filesystem is mounted from a loop device, writes are
throttled by balance_dirty_pages() twice: once when writing
to the filesystem and once when loop_handle_cmd() writes
to the backing file.  This double-throttling can trigger
positive feedback loops that create significant delays: the
throttling at the lower level is seen by the upper level as
a slow device, so it throttles extra hard.

The PF_LESS_THROTTLE flag was created to handle exactly this
circumstance, though with an NFS filesystem mounted from a
local NFS server.  It reduces the throttling on the lower
layer so that it can proceed largely unthrottled.

To demonstrate this, create a filesystem on a loop device
and write (e.g. with dd) several large files which combine
to consume significantly more than the limit set by
/proc/sys/vm/dirty_ratio or dirty_bytes.  Measure the total
time taken.
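The steps above can be sketched as a small timing harness.  The
loop-device lines are commented out because they need root and a
scratch image (the paths, device name, and sizes are placeholders,
not the ones used for the measurements below); the timing loop
itself runs anywhere against a temporary directory:

```shell
#!/bin/sh
# Requires root; uses a scratch backing file (placeholder paths):
# truncate -s 8G /var/tmp/backing.img
# losetup /dev/loop0 /var/tmp/backing.img
# mkfs.ext4 /dev/loop0
# mount /dev/loop0 /mnt

target=$(mktemp -d)   # stand-in for the mounted loop filesystem

start=$(date +%s)
for i in $(seq 1 5); do
    # small files here so the sketch runs quickly; the real test
    # wrote 200 files totalling well over the dirty limit
    dd if=/dev/zero of="$target/file$i" bs=1M count=1 status=none
done
sync
end=$(date +%s)

n=$(ls "$target" | wc -l)
echo "wrote $n files in $((end - start))s"
rm -rf "$target"
```

Scale the file count and sizes up past dirty_ratio/dirty_bytes to
reproduce the stalls described below.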

When I do this directly on a device (no loop device), the
total time for several runs (mkfs, mount, write 200 files,
umount) is fairly stable: 28-35 seconds.

When I do this over a loop device the times are much worse
and less stable: 52-460 seconds, with half below 100 seconds
and half above.

When I apply this patch, the times become stable again,
though not as fast as the no-loop-back case: 53-72 seconds.

There may be room for further improvement as the total
overhead still seems too high, but this is a big improvement.
Signed-off-by: NeilBrown <neilb@suse.com>
---
drivers/block/loop.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 0ecb6461ed81..a7e1dd215fc2 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1694,8 +1694,11 @@ static void loop_queue_work(struct kthread_work *work)
 {
 	struct loop_cmd *cmd =
 		container_of(work, struct loop_cmd, work);
+	int oldflags = current->flags & PF_LESS_THROTTLE;
 
+	current->flags |= PF_LESS_THROTTLE;
 	loop_handle_cmd(cmd);
+	current->flags = (current->flags & ~PF_LESS_THROTTLE) | oldflags;
 }
 
 static int loop_init_request(void *data, struct request *rq,
--
2.12.0