From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <54C9690E.7040701@redhat.com>
Date: Wed, 28 Jan 2015 17:56:14 -0500
From: Max Reitz
MIME-Version: 1.0
References: <1422300468-16216-1-git-send-email-mreitz@redhat.com> <1422300468-16216-5-git-send-email-mreitz@redhat.com> <54C6A622.9010902@redhat.com> <54C6A6E4.8060000@redhat.com> <54C6AD3D.60808@redhat.com> <54C6AE02.4030105@redhat.com> <54C6B006.9000607@redhat.com> <54C95D44.2060104@redhat.com>
In-Reply-To: <54C95D44.2060104@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH 04/21] block: Add bdrv_close_all() handlers
To: Eric Blake, Paolo Bonzini, qemu-devel@nongnu.org
Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi

On 2015-01-28 at 17:05, Eric Blake wrote:
> On 01/26/2015 02:22 PM, Paolo Bonzini wrote:
>>
>> On 26/01/2015 22:13, Max Reitz wrote:
>>>> An eject blocker would also break backwards compatibility, though.
>>>> What about an eject notifier? Would that concept make sense?
>>> It does make sense (in that it is the way I would implement "just do
>>> what we always did"), but I just don't like it for the fact that it
>>> makes NBD a special snowflake. I can live with it, though.
>> Yes, it's weird. But this is just the backwards-compatible solution.
>>
>> I'm okay with implementing only the new solution, but:
>>
>> - the old QMP (and HMP?) commands must be removed
> Back-compat is a bear to figure out. From libvirt's perspective, it is
> okay to require a newer libvirt in order to drive newer qemu (the new
> libvirt knowing how to probe whether old or new commands exist, and use
> the right one); but it is still awkward any time upgrading qemu without
> libvirt causes things to break gratuitously.
>
>> - the new command probably must not reuse the same BB as the guest, and
>> I am not sure that this is possible.
> We've had that design goal in the back of our minds for some time
> (sharing a single BDS among multiple devices) - but I don't think it has
> actually happened yet, so if this is the first time we make it happen,
> there may be lots of details to get right. But it makes the most sense
> (exporting an NBD disk is a form of creating a _new_ BB - distinct from
> a guest-visible device, but both uses are definitely backends; and
> sharing the same BDS among both backends makes total sense, so that the
> drive visible to the guest can change medium without invalidating the
> NBD server serving up the old contents).

Well, I've looked up the discussion Markus, Kevin and I were having; our
conclusion was that some users may find it useful to have their own BB
for an NBD server, while others may want to reuse an existing BB. The
former would be requested by creating an NBD server on a node name
instead of giving a device name; if given a device name, NBD will reuse
that BB and will detach itself on eject. Somehow I lost track of that
final detail (I blame Christmas and New Year), but that is indeed what
we decided upon.
I will implement it in v2 (an eject notifier for BBs, which is then used
by NBD).

For the sake of completeness: currently it is impossible to have
multiple BBs per BDS, so we cannot yet implement a separate BB for the
NBD server. That issue is unrelated to this series, however, so we
should be fine for now.

Max