From: Jeff Moyer <jmoyer@redhat.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: axboe@kernel.dk, vgoyal@redhat.com, linux-kernel@vger.kernel.org,
	linux-ext4@vger.kernel.org
Subject: Re: [PATCH 1/3] block: Implement a blk_yield function to voluntarily give up the I/O scheduler.
Date: Wed, 23 Jun 2010 10:50:28 -0400	[thread overview]
Message-ID: <x49bpb1pzuz.fsf@segfault.boston.devel.redhat.com> (raw)
In-Reply-To: <20100622220406.690bb0c8.akpm@linux-foundation.org> (Andrew Morton's message of "Tue, 22 Jun 2010 22:04:06 -0700")

Andrew Morton <akpm@linux-foundation.org> writes:

> On Tue, 22 Jun 2010 17:35:00 -0400 Jeff Moyer <jmoyer@redhat.com> wrote:
>
>> This patch implements a blk_yield to allow a process to voluntarily
>> give up its I/O scheduler time slice.  This is desirable for those processes
>> which know that they will be blocked on I/O from another process, such as
>> the file system journal thread.  Following patches will put calls to blk_yield
>> into jbd and jbd2.
>> 
>
> I'm looking through this patch series trying to find the
> analysis/description of the cause for this (bad) performance problem. 
> But all I'm seeing is implementation stuff :(  It's hard to review code
> with your eyes shut.

Sorry about that, Andrew.  The problem case is (for example) iozone when
run with small file sizes (up to 8MB), configured to fsync after each
file is written.  Because the iozone process is issuing synchronous
writes, it is put onto CFQ's SYNC service tree.  The significance of
this is that CFQ will idle for up to 8ms waiting for further requests on
such queues.  So, what happens is that the iozone process will issue,
say, 64KB worth of write I/O.  That I/O just lands in the page cache.
Then the iozone process does an fsync, which forces those I/Os to disk
as synchronous writes.  The file system's fsync method is also invoked,
and for ext3/4 it calls log_start_commit followed by log_wait_commit.
Because those synchronous writes were forced out in the context of the
iozone process, CFQ will now idle on iozone's cfqq waiting for more I/O.
However, iozone's progress is now gated by the journal thread.

So, I tried two approaches to solving the problem.  The first, which
Christoph brought up again in this thread, was to simply mark all
journal I/O as BIO_RW_META, which would cause the iozone process' cfqq
to be preempted when the journal issued its I/O.  However, Vivek pointed
out that this was bad for interactive performance.

The second approach, of which this is the fourth iteration, was to allow
the file system to explicitly tell the I/O scheduler that it is waiting
on I/O from another process.
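To make the intended call site concrete, here is a rough sketch of how
jbd's commit-wait path would use the proposed interface.  This is
pseudocode, not compilable as-is; the exact signature and the queue
lookup are guesses based on this series, and the real hunks are in
patches 2 and 3:

```c
/*
 * Sketch only.  blk_yield() tells the I/O scheduler that this process
 * is about to block waiting on I/O issued by another task (here, the
 * journal thread), so CFQ need not idle on our queue in the meantime.
 */
int log_wait_commit(journal_t *journal, tid_t tid)
{
	...
	while (tid_gt(tid, journal->j_commit_sequence)) {
		/* Yield our cfqq in favor of the journal thread. */
		blk_yield(queue_of(journal), journal->j_task);
		wait_event(journal->j_wait_done_commit,
			   !tid_gt(tid, journal->j_commit_sequence));
	}
	...
}
```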

Does that help?  Let me know if you have any more questions, and thanks
a ton for looking at this, Andrew.  I appreciate it.

The comments I've elided from my response make perfect sense, so I'll
address them in the next posting.

>>  };
>>  #define CFQ_RB_ROOT	(struct cfq_rb_root) { .rb = RB_ROOT, .left = NULL, \
>> -			.count = 0, .min_vdisktime = 0, }
>> +			.count = 0, .min_vdisktime = 0, .last_expiry = 0UL, \
>> +			.last_pid = (pid_t)-1, }
>
> May as well leave the 0 and NULL fields unmentioned (ie: don't do
> crappy stuff because the old code did crappy stuff!)

I don't actually understand why you take issue with this.

>> +{
>> +	struct cfq_data *cfqd = q->elevator->elevator_data;
>> +	struct cfq_io_context *cic, *new_cic;
>> +	struct cfq_queue *cfqq;
>> +
>> +	cic = cfq_cic_lookup(cfqd, current->io_context);
>> +	if (!cic)
>> +		return;
>> +
>> +	task_lock(tsk);
>> +	new_cic = cfq_cic_lookup(cfqd, tsk->io_context);
>> +	atomic_long_inc(&tsk->io_context->refcount);
>
> How do we know tsk has an io_context?  Use get_io_context() and check
> its result?

I'll fix that up.  It works now only by luck (and the fact that there's
a good chance the journal thread has an I/O context).

>> +	task_unlock(tsk);
>> +	if (!new_cic)
>> +		goto out_dec;
>> +
>> +	spin_lock_irq(q->queue_lock);
>> +
>> +	cfqq = cic_to_cfqq(cic, 1);
>> +	if (!cfqq)
>> +		goto out_unlock;
>> +
>> +	/*
>> +	 * If we are currently servicing the SYNC_NOIDLE_WORKLOAD, and we
>> +	 * are idling on the last queue in that workload, *and* there are no
>> +	 * potential dependent readers running currently, then go ahead and
>> +	 * yield the queue.
>> +	 */
>
> Comment explains the code, but doesn't explain the *reason* for the code.

Actually, it explains more than just what the code does.  It would be
difficult for one to divine that the code actually only really cares
about breaking up a currently running dependent reader.  I'll see if I
can make that more clear.

Cheers,
Jeff

