From: Max Reitz <mreitz@redhat.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
	"qemu-block@nongnu.org" <qemu-block@nongnu.org>
Cc: "kwolf@redhat.com" <kwolf@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Denis Lunev <den@virtuozzo.com>
Subject: Re: [PATCH v5 0/5] qcow2: async handling of fragmented io
Date: Fri, 20 Sep 2019 15:10:11 +0200
Message-ID: <9b56ef11-8c1e-fa48-d838-4fe3ee043474@redhat.com>
In-Reply-To: <ea14f4bc-9a0c-0147-e963-9019fc9f4f2b@virtuozzo.com>


On 20.09.19 14:53, Vladimir Sementsov-Ogievskiy wrote:
> 20.09.2019 15:40, Max Reitz wrote:
>> On 20.09.19 13:53, Vladimir Sementsov-Ogievskiy wrote:
>>> 20.09.2019 14:10, Max Reitz wrote:
>>>> On 16.09.19 19:53, Vladimir Sementsov-Ogievskiy wrote:
>>>>> Hi all!
>>>>>
>>>>> Here is an asynchronous scheme for handling fragmented qcow2
>>>>> reads and writes. Both the qcow2 read and write functions loop through
>>>>> sequential portions of data. This series aims to parallelize these
>>>>> loop iterations.
>>>>> It improves performance for fragmented qcow2 images; I've tested it
>>>>> as described below.
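
As a rough illustration of the scheme described in the cover letter above, here is a minimal Python asyncio analogy, not the actual C implementation (the series itself adds an AIO task pool in C, see patch 2/5 "block: introduce aio task pool"); all names in this sketch are made up:

```python
# Conceptual sketch only (not QEMU code): the series does this with C
# coroutines and an AIO task pool. This just shows the shape of the change,
# turning a sequential per-chunk loop into bounded-concurrency tasks.
import asyncio

MAX_TASKS = 4  # bound on in-flight subrequests, analogous to the task pool size


async def handle_chunk(offset: int, length: int) -> None:
    """Stand-in for reading/writing one contiguous portion of a request."""
    await asyncio.sleep(0.01)  # pretend to do the I/O
    print(f"handled [{offset}, {offset + length})")


async def handle_fragmented_request(chunks):
    sem = asyncio.Semaphore(MAX_TASKS)

    async def run_one(offset, length):
        async with sem:  # wait for a free slot in the "pool"
            await handle_chunk(offset, length)

    # Before: "for offset, length in chunks: await handle_chunk(offset, length)".
    # After: every iteration becomes its own task; completion is awaited once,
    # at the end, so fragmented requests overlap their per-chunk I/O.
    await asyncio.gather(*(run_one(o, l) for o, l in chunks))


if __name__ == "__main__":
    asyncio.run(handle_fragmented_request([(0, 64), (64, 64), (192, 64)]))
```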
>>>>
>>>> Thanks again, applied to my block branch:
>>>>
>>>> https://git.xanclic.moe/XanClic/qemu/commits/branch/block
>>>
>>> Thanks a lot!
>>>
>>>>
>>>>> v5: fix 026 and rebase on Max's block branch [perf results not updated]:
>>>>>
>>>>> 01: new, prepare 026 to not fail
>>>>> 03: - drop read_encrypted blkdbg event [Kevin]
>>>>>       - assert((x & (BDRV_SECTOR_SIZE - 1)) == 0) -> assert(QEMU_IS_ALIGNED(x, BDRV_SECTOR_SIZE)) [rebase] (equivalence sketched below)
>>>>>       - full host offset in argument of qcow2_co_decrypt [rebase]
>>>>> 04: - replace the remaining qcow2_co_do_pwritev with qcow2_co_pwritev_task in a comment [Max]
>>>>>       - full host offset in argument of qcow2_co_encrypt [rebase]
>>>>> 05: - now the patch doesn't affect iotest 026, so its output is not changed
>>>>>
>>>>> The rebase changes seem trivial, so I've kept the r-b marks.
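
For reference on the assert() change in item 03 above: for a power-of-two alignment such as BDRV_SECTOR_SIZE (512), the bitmask test and the modulo test are the same predicate; QEMU_IS_ALIGNED() just states the intent more readably. A quick demonstration (plain Python, names local to this snippet):

```python
# For a power-of-two alignment a, (x & (a - 1)) == 0 tests exactly the same
# thing as x % a == 0, which is the intent QEMU_IS_ALIGNED() spells out.
BDRV_SECTOR_SIZE = 512  # power of two, as in QEMU

def aligned_bitmask(x, a):
    return (x & (a - 1)) == 0

def aligned_modulo(x, a):
    return x % a == 0

for x in range(4 * BDRV_SECTOR_SIZE + 1):
    assert aligned_bitmask(x, BDRV_SECTOR_SIZE) == aligned_modulo(x, BDRV_SECTOR_SIZE)
print("bitmask and modulo forms agree on all tested offsets")
```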
>>>>
>>>> (For the record, I didn’t consider them trivial, or I’d’ve applied
>>>> Maxim’s series on top of yours.  I consider a conflict to be trivially
>>>> resolvable only if there is only one way of doing it; but when I
>>>> resolved the conflicts myself, I resolved the one in patch 3 differently
>>>> from you – I added an offset_in_cluster variable to
>>>> qcow2_co_preadv_encrypted().  Sure, it’s still simple and the difference
>>>> is minor, but that was exactly where I thought I couldn’t consider
>>>> this trivial.)
>>>>
>>>
>>> Hmm. Maybe it's trivial enough to keep the r-b (as my change is trivial itself), but not
>>> trivial enough to modify someone else's patch while queuing it? If you disagree, I'll be more
>>> careful about keeping r-b tags on changed patches, sorry.
>>
>> It doesn’t matter much to me, I diff all patches anyway. :-)
>>
> 
> Then a bit off-topic:
> 
> Which tools do you use?
> 
> I have some scripts to compare different versions of one series (or to check what
> was changed in patches during some porting process). The core thing is to filter
> out uninteresting numbers and hashes, which make the diffs noisy, and then call vimdiff.
> But maybe I've reinvented the wheel.

Just kompare as a graphical diff tool; I just scroll past the hash diffs.

But now that you gave me the idea, maybe I should write a script to
filter them...  (So, no, I don’t know of a tool that would do that
already :-/)
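
For what it's worth, here is one way such a filter could look; a minimal sketch (hypothetical, not an existing tool) that masks anything resembling a commit or blob hash in two patch files and then hands the normalized copies to vimdiff, so only real content differences remain visible:

```python
#!/usr/bin/env python3
# Hypothetical helper, sketched for illustration: mask commit/blob hashes
# in two patch files, then compare the normalized copies with vimdiff.
import re
import subprocess
import sys
import tempfile

HASH_RE = re.compile(r"\b[0-9a-f]{7,40}\b")  # typical abbreviated/full hash lengths


def normalize(path: str) -> str:
    """Write a copy of `path` with every hash replaced by a fixed token."""
    out = tempfile.NamedTemporaryFile("w", suffix=".patch", delete=False)
    with open(path) as src:
        for line in src:
            out.write(HASH_RE.sub("HASH", line))
    out.close()
    return out.name


if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit(f"usage: {sys.argv[0]} OLD.patch NEW.patch")
    subprocess.run(["vimdiff", normalize(sys.argv[1]), normalize(sys.argv[2])])
```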

Max


Thread overview: 16+ messages
2019-09-16 17:53 [Qemu-devel] [PATCH v5 0/5] qcow2: async handling of fragmented io Vladimir Sementsov-Ogievskiy
2019-09-16 17:53 ` [Qemu-devel] [PATCH v5 1/5] qemu-iotests: ignore leaks on failure paths in 026 Vladimir Sementsov-Ogievskiy
2019-09-16 17:53 ` [Qemu-devel] [PATCH v5 2/5] block: introduce aio task pool Vladimir Sementsov-Ogievskiy
2019-09-16 17:53 ` [Qemu-devel] [PATCH v5 3/5] block/qcow2: refactor qcow2_co_preadv_part Vladimir Sementsov-Ogievskiy
2019-09-16 17:53 ` [Qemu-devel] [PATCH v5 4/5] block/qcow2: refactor qcow2_co_pwritev_part Vladimir Sementsov-Ogievskiy
2019-09-16 17:53 ` [Qemu-devel] [PATCH v5 5/5] block/qcow2: introduce parallel subrequest handling in read and write Vladimir Sementsov-Ogievskiy
2019-09-17  9:32 ` [Qemu-devel] [PATCH v5 0/5] qcow2: async handling of fragmented io Vladimir Sementsov-Ogievskiy
2019-09-20 11:10 ` Max Reitz
2019-09-20 11:53   ` Vladimir Sementsov-Ogievskiy
2019-09-20 12:40     ` Max Reitz
2019-09-20 12:53       ` Vladimir Sementsov-Ogievskiy
2019-09-20 13:10         ` Max Reitz [this message]
2019-09-20 13:26           ` Vladimir Sementsov-Ogievskiy
2019-09-20 13:29             ` Max Reitz
2019-09-20 14:17         ` Eric Blake
2019-09-20 14:39           ` Vladimir Sementsov-Ogievskiy
