Date: Thu, 24 Nov 2016 16:08:56 +0000
From: Stefan Hajnoczi
Message-ID: <20161124160856.GB13535@stefanha-x1.localdomain>
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
To: Ketan Nilangekar
Cc: Paolo Bonzini, ashish mittal, "Daniel P. Berrange", Jeff Cody,
 qemu-devel, Kevin Wolf, Markus Armbruster, Fam Zheng, Ashish Mittal,
 Abhijit Dey, Buddhi Madhav, "Venkatesha M.G.", Nitin Jerath,
 Gaurav Bhandarkar, Abhishek Kane

On Thu, Nov 24, 2016 at 11:31:14AM +0000, Ketan Nilangekar wrote:
> 
> On 11/24/16, 4:41 PM, "Stefan Hajnoczi" wrote:
> 
> On Thu, Nov 24, 2016 at 05:44:37AM +0000, Ketan Nilangekar wrote:
> > On 11/24/16, 4:07 AM, "Paolo Bonzini" wrote:
> > >On 23/11/2016 23:09, ashish mittal wrote:
> > >> On the topic of protocol security -
> > >>
> > >> Would it be enough for the first patch to implement only
> > >> authentication and not encryption?
> > >
> > >Yes, of course. However, as we introduce more and more
> > >QEMU-specific characteristics to a protocol that is already
> > >QEMU-specific (it doesn't do failover, etc.), I am still not sure
> > >of the actual benefit of using libqnio versus having an NBD server
> > >or FUSE driver.
> > >
> > >You have already mentioned performance, but the design has changed
> > >so much that I think one of the two things has to change: either
> > >failover moves back to QEMU and there is no (closed source)
> > >translator running on the node, or the translator needs to speak a
> > >well-known and already-supported protocol.
> 
> > IMO the design has not changed; the implementation has changed
> > significantly. I would propose that we keep the resiliency/failover
> > code out of the QEMU driver and implement it entirely in libqnio,
> > as planned, in a subsequent revision. The VxHS server does not need
> > to understand/handle failover at all.
> >
> > Today libqnio gives us significantly better performance than any
> > NBD/FUSE implementation. We know because we have prototyped with
> > both. Significant improvements to libqnio are also in the pipeline
> > which will use cross memory attach calls to further boost
> > performance. Of course, a big reason for the performance is also
> > the HyperScale storage backend, but we believe this method of I/O
> > tapping/redirecting can be leveraged by other solutions as well.
> 
> By "cross memory attach" do you mean
> process_vm_readv(2)/process_vm_writev(2)?
> 
> Ketan> Yes.
> 
> That puts us back to square one in terms of security. You have
> (untrusted) QEMU + (untrusted) libqnio directly accessing the memory
> of another process on the same machine. That process is therefore
> also untrusted and may only process data for one guest so that guests
> stay isolated from each other.
> 
> Ketan> Understood, but this will be no worse than the current
> network-based communication between qnio and the vxhs server. And
> although we have questions around QEMU trust/vulnerability issues, we
> are looking to implement a basic authentication scheme between
> libqnio and the vxhs server.

This is incorrect.

Cross memory attach is equivalent to ptrace(2) (i.e. debugger) access.
It means process A reads/writes directly from/to process B's memory.
Both processes must have the same uid/gid. There is no trust boundary
between them.
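To make that concrete, here is a minimal sketch of a cross memory
attach read; the syscall is real (see process_vm_readv(2)), but the
target pid and remote address below are hypothetical placeholders. The
call succeeds only if the caller is allowed to ptrace the target, i.e.
same uid/gid or CAP_SYS_PTRACE, which is exactly why no trust boundary
exists between the two processes:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/uio.h>    /* process_vm_readv() */

    int main(void)
    {
        pid_t target = 1234;    /* hypothetical pid of the peer process */
        char buf[4096];
        struct iovec local  = { .iov_base = buf, .iov_len = sizeof(buf) };
        struct iovec remote = {
            .iov_base = (void *)0x7f1234560000,   /* hypothetical address */
            .iov_len  = sizeof(buf),
        };

        /* Read 4 KiB straight out of the target's address space. */
        ssize_t n = process_vm_readv(target, &local, 1, &remote, 1, 0);
        if (n < 0) {
            perror("process_vm_readv");   /* EPERM without ptrace rights */
            return 1;
        }
        printf("read %zd bytes from pid %d\n", n, (int)target);
        return 0;
    }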
Network communication does not require both processes to have the same
uid/gid. If you want multiple QEMU processes talking to a single
server there must be a trust boundary between client and server. The
server can validate the input from the client and reject undesired
operations.
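For instance, such validation can be as simple as bounds-checking
every request before it touches storage. A minimal sketch follows; the
request structure and limits are invented for illustration and are not
the actual VxHS protocol:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical wire request; not the real VxHS message format. */
    struct io_request {
        uint64_t offset;
        uint64_t length;
    };

    #define MAX_REQUEST_LEN (1024 * 1024)   /* e.g. 1 MiB per request */

    /* The server checks every client request against the volume the
     * client is provisioned for. Because the check runs on the server
     * side of the trust boundary, a compromised QEMU cannot bypass it. */
    static bool request_is_valid(const struct io_request *req,
                                 uint64_t volume_size)
    {
        if (req->length == 0 || req->length > MAX_REQUEST_LEN) {
            return false;
        }
        /* Overflow-safe bounds check. */
        if (req->offset > volume_size ||
            req->length > volume_size - req->offset) {
            return false;
        }
        return true;
    }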
Hope this makes sense now.

Two architectures that implement the QEMU trust model correctly are:

1. Cross memory attach: each QEMU process has a dedicated vxhs server
   process to prevent guests from attacking each other. This is where
   I said you might as well put the code inside QEMU since there is no
   isolation anyway. From what you've said it sounds like the vxhs
   server needs a host-wide view and is responsible for all guests
   running on the host, so I guess we have to rule out this
   architecture.

2. Network communication: one vxhs server process and multiple guests.
   Here you might as well use NBD or iSCSI because it already exists
   and the vxhs driver doesn't add any unique functionality over
   existing protocols.

> There's an easier way to get even better performance: get rid of
> libqnio and the external process. Move the code from the external
> process into QEMU to eliminate the
> process_vm_readv(2)/process_vm_writev(2) and context switching.
> 
> Can you remind me why there needs to be an external process?
> 
> Ketan> Apart from virtualizing the available direct attached storage
> on the compute, the vxhs storage backend (the external process)
> provides features such as storage QoS, resiliency, efficient use of
> direct attached storage, automatic storage recovery points
> (snapshots), etc. Implementing this in QEMU is not practical and not
> the purpose of proposing this driver.

This sounds similar to what QEMU and Linux (file systems, LVM, RAID,
etc) already do. It brings to mind a third architecture:

3. A Linux driver or file system. Then QEMU opens a raw block device.
   This is what the Ceph rbd block driver in Linux does. This
   architecture has a kernel-userspace boundary, so vxhs does not have
   to trust QEMU.

I suggest Architecture #2. You'll be able to deploy on existing
systems because QEMU already supports NBD and iSCSI. Use the time you
gain from switching to this architecture on benchmarking and
optimizing NBD or iSCSI so that performance is closer to your goal.

Stefan