From mboxrd@z Thu Jan 1 00:00:00 1970 Received: from eggs.gnu.org ([2001:4830:134:3::10]:57581) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1YlDMb-0003z2-3P for qemu-devel@nongnu.org; Thu, 23 Apr 2015 05:24:06 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1YlDMW-0005Sz-U1 for qemu-devel@nongnu.org; Thu, 23 Apr 2015 05:24:05 -0400 Received: from relay.parallels.com ([195.214.232.42]:56075) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1YlDMW-0005SZ-M2 for qemu-devel@nongnu.org; Thu, 23 Apr 2015 05:24:00 -0400 Message-ID: <5538BA1F.5050002@openvz.org> Date: Thu, 23 Apr 2015 12:23:43 +0300 From: "Denis V. Lunev" MIME-Version: 1.0 References: <1426069701-1405-1-git-send-email-den@openvz.org> <1426069701-1405-6-git-send-email-den@openvz.org> <20150422123905.GE27617@stefanha-thinkpad.redhat.com> <5537972F.7040808@openvz.org> <20150423090303.GB8811@stefanha-thinkpad.redhat.com> In-Reply-To: <20150423090303.GB8811@stefanha-thinkpad.redhat.com> Content-Type: text/plain; charset="windows-1252"; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: [Qemu-devel] [PATCH 05/27] block/parallels: add get_block_status List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , To: Stefan Hajnoczi Cc: Kevin Wolf , Stefan Hajnoczi , qemu-devel@nongnu.org, Roman Kagan On 23/04/15 12:03, Stefan Hajnoczi wrote: > On Wed, Apr 22, 2015 at 03:42:23PM +0300, Denis V. Lunev wrote: >> On 22/04/15 15:39, Stefan Hajnoczi wrote: >>> On Wed, Mar 11, 2015 at 01:27:59PM +0300, Denis V. Lunev wrote: >>>> +static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs, >>>> + int64_t sector_num, int nb_sectors, int *pnum) >>>> +{ >>>> + BDRVParallelsState *s = bs->opaque; >>>> + int64_t offset; >>>> + >>>> + qemu_co_mutex_lock(&s->lock); >>>> + offset = seek_to_sector(s, sector_num); >>>> + qemu_co_mutex_unlock(&s->lock); >>> The lock isn't necessary here yet. It may become necessary when write >>> support is added, but probably not even then since seek_to_sector() >>> cannot yield (it's not a coroutine function), so there are no possible >>> races with other coroutines. >>> >>> The same also applies for parallels_co_read(). The lock there isn't >>> necessary since the block driver is read-only and has no mutable state - >>> there is no shared state that needs lock protection. >> do you propose to drop the lock here and add it later with write >> support (patch 08)? I'd prefer to stay on the safe side. Yes, the >> lock is not mandatory at the moment but in 2 patches it will be >> mandatory. > This locking approach is very conservative. At the end of the series, > the only region that needs a lock is allocate_clusters() because > bdrv_write_zeroes() may yield before s->bat_bitmap[] is assigned - we > need to prevent simultaneous allocate_clusters() calls to the same > clusters. > > I'm fine with the conservative approach, but please add a doc comment to > BDRVParallelsState.lock explaining what this lock protects. This way > people reading the code will understand your philosophy and be able to > follow it when modifying the code. > > Stefan ok. sounds good for me. I would like to implement the code as conservative as possible at the beginning and place optimizations later step-by-step to have a clear path to bisect the introductory patch. At the moment I do not think that this locking scheme will affect the performance but I can be wrong. There are a lot of stuff to be implemented from the functional point of view before this will be needed to tweak from my point of view.