qemu-devel.nongnu.org archive mirror
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: Stefan Reiter <s.reiter@proxmox.com>,
	qemu-devel@nongnu.org, qemu-block@nongnu.org
Cc: kwolf@redhat.com, slp@redhat.com, mreitz@redhat.com,
	stefanha@redhat.com, jsnow@redhat.com, dietmar@proxmox.com
Subject: Re: [PATCH v2 1/3] backup: don't acquire aio_context in backup_clean
Date: Fri, 27 Mar 2020 09:00:23 +0300	[thread overview]
Message-ID: <ec317e02-1f34-9219-e2c5-13d352f64d82@virtuozzo.com> (raw)
In-Reply-To: <20200326155628.859862-2-s.reiter@proxmox.com>

26.03.2020 18:56, Stefan Reiter wrote:
> All code-paths leading to backup_clean (via job_clean) have the job's
> context already acquired. The job's context is guaranteed to be the same
> as the one used by backup_top via backup_job_create.

As we already discussed, this is not quite right. So, maybe this patch should
be the last one in the series...
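
To spell out the contract the commit message assumes, here is a tiny stand-alone
sketch (stand-in names and types only - finalize_one_job, AioCtx, acquire/release
are all hypothetical, not the actual QEMU call chain): the code path that drives
cleanup takes the job's context exactly once, and the ->clean() callback runs
under that lock without taking it again.

#include <stdio.h>

/* Stand-ins for AioContext and aio_context_acquire/release: a plain
 * recursion counter is enough to show who holds the lock and how deep. */
typedef struct AioCtx { int depth; } AioCtx;

typedef struct Job {
    AioCtx *ctx;
    void (*clean)(struct Job *job);
} Job;

static void acquire(AioCtx *c) { c->depth++; }
static void release(AioCtx *c) { c->depth--; }

/* Shape of backup_clean() after this patch: it relies on the caller's
 * lock and just does its cleanup. */
static void backup_clean_like(Job *job)
{
    printf("cleaning, lock depth = %d\n", job->ctx->depth);   /* prints 1 */
}

/* Hypothetical caller standing in for the job-layer path that reaches
 * job_clean(): it acquires the context exactly once. */
static void finalize_one_job(Job *job)
{
    acquire(job->ctx);
    job->clean(job);
    release(job->ctx);
}

int main(void)
{
    AioCtx ctx = { 0 };
    Job job = { &ctx, backup_clean_like };
    finalize_one_job(&job);
    return 0;
}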

> 
> Since the previous logic effectively acquired the lock twice, cleanup of
> backups for disks using IO threads was broken: the BDRV_POLL_WHILE in
> bdrv_backup_top_drop -> bdrv_do_drained_begin releases the lock only once,
> thus deadlocking with the IO thread.
> 
> This is a partial revert of 0abf2581717a19.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>   block/backup.c | 4 ----
>   1 file changed, 4 deletions(-)
> 
> diff --git a/block/backup.c b/block/backup.c
> index 7430ca5883..a7a7dcaf4c 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -126,11 +126,7 @@ static void backup_abort(Job *job)
>   static void backup_clean(Job *job)
>   {
>       BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
> -    AioContext *aio_context = bdrv_get_aio_context(s->backup_top);
> -
> -    aio_context_acquire(aio_context);
>       bdrv_backup_top_drop(s->backup_top);
> -    aio_context_release(aio_context);
>   }
>   
>   void backup_do_checkpoint(BlockJob *job, Error **errp)
> 
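
For reference, the failure mode described in the commit message can be
reproduced with a small stand-alone toy (plain pthreads, not QEMU code):
model the context lock as a recursive mutex, acquire it twice, and release
it only once at the "poll" point, as BDRV_POLL_WHILE effectively does. The
lock stays held, so a thread in the role of the iothread would block forever.

#include <pthread.h>
#include <stdio.h>

/* Toy model of the double-acquire deadlock (plain pthreads, not QEMU
 * code).  The recursive mutex plays the role of the AioContext lock. */
static pthread_mutex_t ctx_lock;
static int depth;

static void ctx_acquire(void) { pthread_mutex_lock(&ctx_lock); depth++; }
static void ctx_release(void) { depth--; pthread_mutex_unlock(&ctx_lock); }

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&ctx_lock, &attr);

    ctx_acquire();   /* held by the code path that reaches job_clean()   */
    ctx_acquire();   /* the extra acquire that this patch removes        */

    ctx_release();   /* what the poll loop does before waiting: one drop */
    printf("depth at the poll point: %d\n", depth);   /* 1, not 0 ...    */
    /* ... so another thread calling ctx_acquire() here blocks forever.  */

    ctx_release();   /* final release, only to let the toy exit cleanly  */
    pthread_mutex_destroy(&ctx_lock);
    return 0;
}

Build with e.g. gcc -pthread; the printed depth of 1 is the lock the
iothread can never take.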


-- 
Best regards,
Vladimir



Thread overview: 11+ messages
2020-03-26 15:56 [PATCH v2 0/3] Fix some AIO context locking in jobs Stefan Reiter
2020-03-26 15:56 ` [PATCH v2 1/3] backup: don't acquire aio_context in backup_clean Stefan Reiter
2020-03-27  6:00   ` Vladimir Sementsov-Ogievskiy [this message]
2020-03-26 15:56 ` [PATCH v2 2/3] job: take each job's lock individually in job_txn_apply Stefan Reiter
2020-03-26 15:56 ` [PATCH v2 3/3] replication: acquire aio context before calling job_cancel_sync Stefan Reiter
2020-03-26 17:11 ` [PATCH v2 0/3] Fix some AIO context locking in jobs no-reply
2020-03-26 17:18 ` no-reply
2020-03-27  6:07 ` Dietmar Maurer
2020-03-27  6:13   ` Dietmar Maurer
2020-03-27  7:21   ` Dietmar Maurer
2020-03-27 10:00 ` Dietmar Maurer

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=ec317e02-1f34-9219-e2c5-13d352f64d82@virtuozzo.com \
    --to=vsementsov@virtuozzo.com \
    --cc=dietmar@proxmox.com \
    --cc=jsnow@redhat.com \
    --cc=kwolf@redhat.com \
    --cc=mreitz@redhat.com \
    --cc=qemu-block@nongnu.org \
    --cc=qemu-devel@nongnu.org \
    --cc=s.reiter@proxmox.com \
    --cc=slp@redhat.com \
    --cc=stefanha@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.