* [Qemu-devel] qcow2 performance plan
@ 2010-09-14 13:07 Avi Kivity
From: Avi Kivity @ 2010-09-14 13:07 UTC
  To: qemu-devel, Kevin Wolf

  Here's a draft of a plan that should improve qcow2 performance.  It's 
written in wiki syntax for eventual upload to wiki.qemu.org; lines 
starting with # are numbered lists, not comments.

= Basics =

At a minimum, no operation should block the main thread.  This can be
done in one of two ways: by extending the state machine so that each
blocking operation is performed asynchronously (<code>bdrv_aio_*</code>),
or by threading: each new operation is handed off to a worker thread.
Since a full state machine is prohibitively complex, this document
discusses the threading approach.

== Basic threading strategy ==

A first iteration of qcow2 threading adds a single mutex per image.
The existing qcow2 code is then executed within a worker thread,
acquiring the mutex before starting any operation and releasing it
after completion.  Concurrent operations will simply block until the
running operation completes.  For operations which are already asynchronous,
the blocking time will be negligible, since the code will call
<code>bdrv_aio_{read,write}</code> and return, releasing the mutex.
The immediate benefit is that currently blocking operations no longer block
the main thread; instead, they only block the block operation, which is
blocking anyway.
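As a minimal sketch of this model, in Python rather than QEMU's C and with
hypothetical names (<code>Qcow2Image</code>, <code>submit</code>), assuming a
generic worker pool:

    import threading
    from concurrent.futures import ThreadPoolExecutor

    class Qcow2Image:
        def __init__(self):
            self.mutex = threading.Lock()                # one mutex per image
            self.workers = ThreadPoolExecutor(max_workers=4)

        def submit(self, op, *args):
            # The main thread only queues the request; it never blocks here.
            return self.workers.submit(self._locked_op, op, *args)

        def _locked_op(self, op, *args):
            with self.mutex:                             # serialize the existing qcow2 code
                return op(*args)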

== Eliminating the threading penalty ==

We can eliminate pointless context switches by using the worker thread
context we're in to issue the I/O.  This is trivial for synchronous calls
(<code>bdrv_read</code> and <code>bdrv_write</code>); we simply issue 
the I/O
from the same thread we're currently in.  The underlying raw block format
driver's threading code needs to recognize that we're already in a worker
thread context, so that it does not spawn a worker thread of its own; perhaps
by using a thread-local variable to tell whether it is running in the main
thread or in an I/O worker thread.
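One possible shape for that check, sketched in Python with hypothetical names
(<code>issue_io</code>, <code>_run_in_worker</code>); the real code would live
in the raw block driver:

    import threading
    from concurrent.futures import ThreadPoolExecutor

    _context = threading.local()
    _pool = ThreadPoolExecutor(max_workers=4)

    def _run_in_worker(fn, *args):
        _context.in_worker = True            # mark this thread as an I/O worker
        try:
            return fn(*args)
        finally:
            _context.in_worker = False

    def issue_io(io_fn, *args):
        if getattr(_context, "in_worker", False):
            return io_fn(*args)                              # already in a worker: no extra hop
        return _pool.submit(_run_in_worker, io_fn, *args)    # main thread: hand off once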

For asynchronous operations, this is harder.  We may add
<code>bdrv_queue_aio_read</code> and <code>bdrv_queue_aio_write</code>
to replace a

     bdrv_aio_read()
     mutex_unlock(bs.mutex)
     return;

sequence.  Alternatively, we can simply eliminate the asynchronous calls.  To
retain concurrency we drop the mutex while performing the operation,
converting a <code>bdrv_aio_read</code> to:

     mutex_unlock(bs.mutex)
     bdrv_read()
     mutex_lock(bs.mutex)

This allows the operations to proceed in parallel.
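A small runnable sketch of that pattern (a toy in-memory image with
hypothetical names); the mutex covers metadata lookups but is dropped for the
data transfer itself:

    import threading

    class Image:
        def __init__(self, clusters):
            self.mutex = threading.Lock()
            self.clusters = clusters                 # toy stand-in for the image data

        def read(self, idx):
            self.mutex.acquire()
            offset = self.lookup(idx)                # metadata access: mutex held
            self.mutex.release()                     # drop the mutex around the data I/O
            data = self.clusters[offset]             # so other requests can proceed
            self.mutex.acquire()                     # re-take it for post-I/O metadata work
            self.mutex.release()
            return data

        def lookup(self, idx):
            return idx                               # stand-in for the L1/L2 lookup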

For asynchronous metadata operations, the code is simplified considerably.
Dependency lists that are maintained in metadata caches are replaced by a
mutex; instead of adding an operation to a dependency list, acquire the 
mutex.
Then issue your metadata update synchronously.  If there is a lot of 
contention
on the resource, we can batch all updates into a single write:

    mutex_lock(l1.mutex)
    if not l1.dirty:
        l1.future = copy(l1.data)        # start a new batch from the on-disk view
        l1.dirty = True
    l1.future[idx] = cluster             # apply this update to the pending copy
    mutex_unlock(l1.mutex)

    mutex_lock(l1.write_mutex)           # one writeback at a time
    mutex_lock(l1.mutex)
    if l1.dirty:                         # our update may already have been written
        tmp = copy(l1.future)
        mutex_unlock(l1.mutex)
        bdrv_write(tmp)
        sync
        mutex_lock(l1.mutex)
        l1.data = tmp                    # record what is now on disk
        l1.dirty = tmp != l1.future      # stay dirty if updates arrived meanwhile
    mutex_unlock(l1.mutex)
    mutex_unlock(l1.write_mutex)

== Special casing linux-aio ==

There is one case where a worker thread approach is detrimental:
<code>cache=none</code> together with <code>aio=native</code>.  We can solve
this by checking for the case where we're ready to issue the operation with
no metadata I/O:

     if mutex_trylock(bs.mutex):
         m = metadata_lookup(offset, length)
         if m:
             bdrv_aio_read(bs, m, offset, length, callback) # or write
             mutex_unlock(bs.mutex)
             return
         mutex_unlock(bs.mutex)          # no cached mapping: fall back to a worker
     queue_task(operation, offset, length, callback)

= Speeding up allocation =

When a write grows a qcow2 image, the following operations take place:

# clusters are allocated, and the refcount table is updated to reflect this
# sync to ensure the allocation is committed
# the data is written to the clusters
# the L2 table is located; if it doesn't exist, it is allocated and linked
# the L2 table is updated
# sync to ensure the L2->data pointer is committed
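For reference, the same sequence sketched over a toy Python image model (dicts
standing in for the real tables, a file object providing the sync; all names
are hypothetical).  The two <code>flush()</code> calls are the syncs in steps
2 and 6:

    def allocating_write(image, guest_idx, data):
        cluster = image["next_free"]
        image["next_free"] += 1
        image["refcount"][cluster] = 1                       # 1. allocate, update refcounts
        image["file"].flush()                                # 2. sync: allocation committed
        image["clusters"][cluster] = data                    # 3. write the data
        l2 = image["l2"].setdefault(guest_idx // 512, {})    # 4. locate or allocate the L2 table
        l2[guest_idx % 512] = cluster                        # 5. update the L2 entry
        image["file"].flush()                                # 6. sync: L2->data pointer committed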

We can avoid the first sync by maintaining a volatile list of allocated
but not yet linked clusters.  This requires a tradeoff between the risk of
losing those clusters on an abort, and the performance gain.  To 
minimize the
risk, the list is flushed if there is no demand for it.

# we maintain low and high thresholds for the volatile free list
# if we're under the low threshold, we start a task to allocate clusters up to the midpoint
# if we're above the high threshold, we start a task to return clusters down to the midpoint
# if we ever need a cluster (extent) and find that the volatile list is empty, we double the low and high thresholds (up to a limit)
# once a second, we decrease the thresholds by 25%

This ensures that sustained writes will not block on allocation.
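A sketch of the threshold bookkeeping in Python (hypothetical names and
constants; the background task that actually allocates or returns clusters is
left as a stub):

    import threading

    class VolatileFreeList:
        def __init__(self, low=64, high=256, limit=4096):
            self.low, self.high, self.limit = low, high, limit
            self.clusters = []                       # allocated but not yet linked
            self.lock = threading.Lock()

        def take(self):
            with self.lock:
                if not self.clusters:
                    # ran dry: double both thresholds, up to a limit
                    self.low = min(self.low * 2, self.limit)
                    self.high = min(self.high * 2, self.limit)
                    return None                      # caller falls back to a slow allocation
                cluster = self.clusters.pop()
                if len(self.clusters) < self.low:
                    self.adjust_to((self.low + self.high) // 2)
                return cluster

        def put(self, cluster):
            with self.lock:                          # e.g. a cancelled request returns a cluster
                self.clusters.append(cluster)
                if len(self.clusters) > self.high:
                    self.adjust_to((self.low + self.high) // 2)

        def decay(self):
            with self.lock:                          # called once a second
                self.low = max(1, self.low * 3 // 4)
                self.high = max(2, self.high * 3 // 4)

        def adjust_to(self, target):
            pass                                     # background task allocates or returns clusters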

Note that a lost cluster is simply leaked; no data loss is involved.
The free list can be rebuilt if an unclean shutdown is detected.  Older
implementations can simply ignore those leaks.  To transport an image, it
is recommended to run qemu-img to reclaim any leaked clusters in case the
image was shut down uncleanly.
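A sketch of such a rebuild over the same toy model as above (hypothetical
structures): any cluster with a nonzero refcount that no L2 entry references
is a leak and can be returned to the free list:

    def find_leaked_clusters(image):
        # Walk the toy L2 tables and compare with the refcount table; clusters
        # that are allocated but unreferenced were leaked and can be reused.
        referenced = set()
        for l2 in image["l2"].values():
            referenced.update(l2.values())
        return [c for c, count in image["refcount"].items()
                if count > 0 and c not in referenced]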

== Alternative implementation ==

We can avoid a volatile list by relying on guest concurrency.  We replace
<code>bdrv_aio_write</code> by <code>bdrv_aio_submit</code>, which issues
many operations in parallel (but completes each one separately).  This 
mimics
SCSI and virtio devices, which can trigger multiple ops with a single call
to the hardware.  We make a first pass over all write operations to see how
many clusters need to be allocated, allocate them all in a single operation,
and then submit all of the allocating writes.  Reads and non-allocating
writes can proceed in parallel.
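A sketch of such a batched submit over a simplified toy model (a flat
<code>map</code> dict standing in for the L2 tables; names are hypothetical
and the real interface would sit in the block layer):

    def aio_submit(image, writes):
        # First pass: find the writes whose clusters are not yet allocated.
        allocating = [w for w in writes if w["guest_idx"] not in image["map"]]
        # Allocate everything they need in one go: one refcount update, one sync.
        first_free = image["next_free"]
        image["next_free"] += len(allocating)
        for i, w in enumerate(allocating):
            image["map"][w["guest_idx"]] = first_free + i
            image["refcount"][first_free + i] = 1
        image["file"].flush()
        # Second pass: submit every write; each one completes independently, and
        # reads or non-allocating writes proceed in parallel with the batch.
        return [submit_one(image, w) for w in writes]

    def submit_one(image, w):
        return (image["map"][w["guest_idx"]], w["data"])     # stand-in for the real I/O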

Note that this implementation (as well as the current qcow2 code) may 
leak clusters if qemu aborts in the wrong place.  Avoiding leaks 
completely requires either journalling, allocate-on-write, or a free 
list rebuild.  The first two are slow due to the need for barriers.

= Avoiding L2 syncs =

Currently, after updating an L2 table with a cluster pointer, we sync to avoid
the loss of a cluster.  We can avoid this sync, since the guest is required to
sync itself if it wants to ensure its data is on disk.  We only need to sync
when we UNMAP a cluster, before we free it in the refcount table.
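A sketch of the discard path under that rule, using the same toy structures as
the earlier sketches (hypothetical helpers): the one remaining sync sits
between clearing the L2 entry and freeing the cluster, so a stale L2 entry can
never point at a reused cluster after a crash:

    def discard_cluster(image, guest_idx):
        l2 = image["l2"][guest_idx // 512]
        cluster = l2[guest_idx % 512]
        l2[guest_idx % 512] = 0                  # unmap: clear the L2 entry
        image["file"].flush()                    # the one remaining sync
        image["refcount"][cluster] -= 1          # only now is the cluster free for reuse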

= Copying L1 tables =

qcow2 requires copying of L1 tables in two cases: taking a snapshot, and 
growing the physical image size beyond a certain boundary.  Since L1s 
are relatively small, even for very large images, and growing L1 is very 
rare, we can exclude all write operations by having a global 
shared/exclusive lock taken for shared access by write operations, and 
for exclusive access by grow/snapshot operations.
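A minimal sketch of such a shared/exclusive lock in Python (the standard
library has no reader/writer lock, so a condition variable stands in; class
and method names are hypothetical):

    import threading

    class SharedExclusiveLock:
        def __init__(self):
            self.cond = threading.Condition()
            self.readers = 0

        def acquire_shared(self):                # taken by ordinary write requests
            with self.cond:
                self.readers += 1

        def release_shared(self):
            with self.cond:
                self.readers -= 1
                self.cond.notify_all()

        def acquire_exclusive(self):             # taken by snapshot / L1 grow
            self.cond.acquire()                  # blocks new shared acquirers
            while self.readers:
                self.cond.wait()                 # wait for in-flight writes to drain

        def release_exclusive(self):
            self.cond.release()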

If concurrent growing and writing is desired, it can be achieved by 
having a thread copy L1, and requiring each L1 update to update both 
copies (for the region already copied) or just the source (for the 
region that was not yet copied).
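A sketch of that scheme under stated assumptions (plain lists for the two L1
copies, a hypothetical <code>state</code> dict tracking the copy boundary, and
a lock supplied by the caller):

    def copy_l1(old_l1, new_l1, state, lock):
        # Background task: copy L1 entry by entry while writes continue.
        # state is shared with updaters, e.g. state = {"copied_up_to": 0}.
        for i in range(len(old_l1)):
            with lock:
                new_l1[i] = old_l1[i]
                state["copied_up_to"] = i + 1    # boundary between the two regions

    def update_l1(old_l1, new_l1, state, idx, value, lock):
        with lock:
            old_l1[idx] = value                  # the source always gets the update
            if idx < state["copied_up_to"]:
                new_l1[idx] = value              # already-copied region: update both copies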

-- 
error compiling committee.c: too many arguments to function
