From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Laurent Vivier" <lvivier@redhat.com>,
	"Thomas Huth" <thuth@redhat.com>,
	"Juan Quintela" <quintela@redhat.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	"Marc-André Lureau" <marcandre.lureau@gmail.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 0/2] Add dbus-vmstate
Date: Fri, 23 Aug 2019 16:14:48 +0100	[thread overview]
Message-ID: <20190823151448.GL2784@work-vm> (raw)
In-Reply-To: <20190823150508.GM9654@redhat.com>

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Fri, Aug 23, 2019 at 03:56:34PM +0100, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > On Fri, Aug 23, 2019 at 03:26:02PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > > > On Fri, Aug 23, 2019 at 03:09:48PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> > > > > > > Hi
> > > > > > > 
> > > > > > > On Fri, Aug 23, 2019 at 5:00 PM Dr. David Alan Gilbert
> > > > > > > <dgilbert@redhat.com> wrote:
> > > > > > > >
> > > > > > > > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > > > > > >
> > > > > > > > <snip>
> > > > > > > >
> > > > > > > > > This means QEMU still has to iterate over every single client
> > > > > > > > > on the bus to identify them. If you're doing that, there's
> > > > > > > > > no point in owning a well known service at all. Just iterate
> > > > > > > > > over the unique bus names and look for the exported object
> > > > > > > > > path /org/qemu/VMState
> > > > > > > > >
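As a rough illustration of that enumeration, here is a sketch assuming GLib/GDBus;
the helper function and the Introspect-based probing are made up for this example,
not taken from the dbus-vmstate patch:

  #include <gio/gio.h>

  static void find_vmstate_helpers(GDBusConnection *conn)
  {
      GError *err = NULL;
      GVariant *reply = g_dbus_connection_call_sync(conn,
          "org.freedesktop.DBus", "/org/freedesktop/DBus",
          "org.freedesktop.DBus", "ListNames",
          NULL, G_VARIANT_TYPE("(as)"),
          G_DBUS_CALL_FLAGS_NONE, -1, NULL, &err);
      if (!reply) {
          g_warning("ListNames failed: %s", err->message);
          g_clear_error(&err);
          return;
      }

      GVariantIter *iter;
      const char *name;
      g_variant_get(reply, "(as)", &iter);
      while (g_variant_iter_loop(iter, "&s", &name)) {
          if (name[0] != ':') {
              continue;               /* only unique connection names */
          }
          /* Probe each peer for an object at the agreed path by asking it
           * to introspect /org/qemu/VMState; the short timeout keeps a
           * wedged helper from stalling the scan. */
          GVariant *probe = g_dbus_connection_call_sync(conn,
              name, "/org/qemu/VMState",
              "org.freedesktop.DBus.Introspectable", "Introspect",
              NULL, G_VARIANT_TYPE("(s)"),
              G_DBUS_CALL_FLAGS_NONE, 1000, NULL, NULL);
          if (probe) {
              g_print("vmstate helper candidate: %s\n", name);
              g_variant_unref(probe);
          }
      }
      g_variant_iter_free(iter);
      g_variant_unref(reply);
  }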
> > > > > > > >
> > > > > > > > Not knowing anything about DBus security, I want to ask how do
> > > > > > > > we handle security here?
> > > > > > > 
> > > > > > > First of all, we are talking about cooperative processes, and having a
> > > > > > > specific bus for each qemu instance. So some amount of security/trust
> > > > > > > is already assumed.
> > > > > > 
> > > > > > Some but we need to keep it as limited as possible; for example two
> > > > > > reasons for having separate processes both come down to security:
> > > > > > 
> > > > > >   a) vtpm - however screwy the qemu is, you can never get to the keys in
> > > > > > the vtpm
> > > > > 
> > > > > Processes connected to dbus can only call the DBus APIs that vtpm
> > > > > actually exports.  The vtpm should simply *not* export a DBus
> > > > > API that allows anything to fetch the keys.
> > > > > 
> > > > > If it did want to export APIs for fetching keys, then we would
> > > > > have to ensure suitable dbus /selinux policy was created to
> > > > > prevent unwarranted access.
> > > > 
> > > > This was really just one example of where the security/trust isn't
> > > > assumed; however a more concrete case is migration of a vtpm, and even
> > > > though it's probably encrypted blob you still don't want some other
> > > > device to grab the migration data - or to say reinitialise the vtpm.
> > > 
> > > That can be dealt with by the dbus security policies, provided
> > > you either run the vtpm as a different user ID from the other
> > > untrustworthy helpers, or use a different selinux context for
> > > vtpm. You can then express that only the user that QEMU is
> > > running under can talk to vtpm over dbus.
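A sketch of what that could look like in the bus configuration, per dbus-daemon(1);
the service name and user below are made up for the example:

  <busconfig>
    <!-- By default nobody may talk to the vtpm helper's service name... -->
    <policy context="default">
      <deny send_destination="org.qemu.VTPM1"/>
    </policy>
    <!-- ...except the user QEMU itself runs as. -->
    <policy user="qemu-vm1">
      <allow send_destination="org.qemu.VTPM1"/>
    </policy>
  </busconfig>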
> > 
> > The need for the extra user ID or selinux context is a pain;
> > but probably warranted for the vTPM;  in general though some of this
> > exists because of the choice of DBus and wouldn't be a problem for
> > something that had a point-to-point socket it sent everything over.
> 
> NB be careful to use s/DBus/DBus bus/
> 
> DBus the protocol is fine to be used in a point-to-point socket
> scenario - the use of the bus is strictly optional.
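A minimal sketch of that point-to-point mode, assuming GLib/GDBus and a
pre-arranged unix socket path (the names here are illustrative):

  #include <gio/gio.h>

  static GDBusConnection *connect_p2p(const char *sock_path)
  {
      GError *err = NULL;
      g_autofree char *addr = g_strdup_printf("unix:path=%s", sock_path);

      /* No bus involved: connect straight to the peer's listener and run
       * the normal DBus authentication handshake over that socket. */
      GDBusConnection *conn = g_dbus_connection_new_for_address_sync(addr,
          G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_CLIENT,
          NULL /* auth observer */, NULL /* cancellable */, &err);
      if (!conn) {
          g_warning("p2p dbus connection failed: %s", err->message);
          g_clear_error(&err);
      }
      return conn;
  }

Everything above the transport (marshalling, introspection, the exported
vmstate interface) stays the same; only bus names and bus policy go away.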
> 
> If all communication we expect is exclusively  Helper <-> QEMU,
> then I'd argue in favour of dbus in point-to-point mode.
> 
> The use cases Stefan brought up for virtiofsd though is what
> I think brings the idea of using the bus relevant. It is the
> desire to allow online control/mgmt of the helper, which
> introduces a 3rd party which isn't QEMU. Instead either libvirt
> or a standalone admin/debugging tool. With multiple parties
> involved I think the bus becomes relevant
> 
> With p2p mode you could have 2 dbus sockets for Helper <-> QEMU
> and another dbus socket for Helper <-> libvirt/debugging, but
> this isn't an obvious security win over using the bus, as you
> now need different access rules for each of the p2p sockets
> to say who can connect to which socket. 

Right; point-to-point doesn't worry me much as long as we're careful;
it's now that we're suddenly proposing something much more general
that I think we need to start being really careful.

> > > Where I think you could have problems is if you needed finer
> > > grained control with selinux, e.g. if vtpm exports 2 different
> > > services, you can't allow access to one service, but forbid
> > > access to the other service.
> > > 
> > > > > >   b) virtio-gpu, loads of complex GPU code that can't break the main
> > > > > > qemu process.
> > > > > 
> > > > > That's no problem - if virtio-gpu crashes, it disappears from the dbus
> > > > > bus, but everything else keeps running.
> > > > 
> > > > Crashing is the easy case; assume it's malicious and you don't want it
> > > > getting to say a storage device provided by another vhost-user device.
> > > 
> > > If we assume that the 2 processes can't communicate / access each
> > > other outside DBus, then the attack avenues added by use of dbus
> > > are most likely either:
> > > 
> > >  - invoking some DBus method that should not be allowed due
> > >    to incomplete dbus security policy. 
> > > 
> > >  - finding a crash in a dbus client library that you can somehow
> > >    exploit to get remote code execution in the separate process
> > > 
> > >    I won't claim this is impossible, but I think it helps to be
> > >    using a standard, widely used battle tested RPC impl, rather
> > >    than a home grown RPC protocol.
> > 
> > It's only the policy case I worry about; and my point here is if we
> > decide to use dbus then we have to think properly about security and
> > defined stuff.
> > 
> > > 
> > > 
> > > > > > > But if necessary, dbus can enforce policies on who is allowed to own a
> > > > > > > name, or to send/receive messages. As far as I know, this is
> > > > > > > mostly user/group policies.
> > > > > > > 
> > > > > > > But there are also SELinux checks on send_msg and acquire_svc (see
> > > > > > > dbus-daemon(1))
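For reference, that hook looks roughly like this in the bus configuration
(the name and context are illustrative): acquire_svc is checked when a
connection tries to own the associated name, and send_msg is checked on
messages passing between connections.

  <busconfig>
    <selinux>
      <!-- Tie ownership of this (made-up) well-known name to an SELinux
           type so the acquire_svc check applies to it. -->
      <associate own="org.qemu.VTPM1" context="svirt_dbus_t"/>
    </selinux>
  </busconfig>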
> > > > > > 
> > > > > > But how does something like SELinux interact with a private dbus 
> > > > > > rather than the system dbus?
> > > > > 
> > > > > There are already two dbus-daemons on each host - the system one and
> > > > > the session one, and they get different selinux contexts,
> > > > > system_dbus_t and unconfined_dbus_t.
> > > > > 
> > > > > Since libvirt would be responsible for launching these private dbus
> > > > > daemons it would be easy to make it run  svirt_dbus_t for example.
> > > > > Actually it would be  svirt_dbus_t:s0:cNNN,cMMM to get uniqueness
> > > > > per VM.
> > > > > 
> > > > > Will of course require us to talk to the SELinux maintainers to
> > > > > get some sensible policy rules created.
> > > > 
> > > > This all relies on SELinux and running privileged qemu/vhost-user pairs;
> > > > needing to do that purely to enforce security seems wrong.
> > > 
> > > Compare to an alternative bus-less solution where each helper has
> > > a direct UNIX socket connection to QEMU.
> > > 
> > > If two helpers are running as the same user ID, they can still
> > > directly attack each other via things like ptrace or /proc/$PID/mem,
> > > unless you've used SELinux to isolate them, or run each as a distinct
> > > user ID.  If you do the latter, then we can still easily isolate
> > > them using dbus.
> > 
> > You can lock those down pretty easily though.
> 
> How were you thinking ?
> 
> If you're not using SELinux or separate user IDs, then AFAICT you've
> got a choice of using seccomp or containers.  seccomp is really hard
> to get a useful policy out of with QEMU, and using containers for
> each helper process adds a level of complexity worse than selinux
> or separate user IDs, so isn't an obvious win over using dbus.

You can just drop CAP_SYS_PTRACE on the whole lot for that;
I thought there was something for /proc/.../mem as well.
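A sketch of what that could look like in a helper at startup; note that
PR_CAPBSET_DROP needs CAP_SETPCAP, so in practice the launcher would do this
before dropping its own privileges, and PR_SET_DUMPABLE is the usual knob
gating /proc/<pid>/mem:

  #include <sys/prctl.h>
  #include <linux/capability.h>
  #include <stdio.h>

  static int harden_helper(void)
  {
      /* Remove CAP_SYS_PTRACE from the bounding set so it cannot be
       * regained later, even across execve(). */
      if (prctl(PR_CAPBSET_DROP, CAP_SYS_PTRACE, 0, 0, 0) < 0) {
          perror("PR_CAPBSET_DROP");
          return -1;
      }
      /* Mark the process non-dumpable; this is also what stops other
       * processes of the same uid opening /proc/<pid>/mem. */
      if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) < 0) {
          perror("PR_SET_DUMPABLE");
          return -1;
      }
      return 0;
  }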

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

