* [Ksummit-discuss] [TECH TOPIC] Bus IPC
@ 2016-07-28 22:24 David Herrmann
From: David Herrmann @ 2016-07-28 22:24 UTC (permalink / raw)
  To: ksummit-discuss

Tom Gundersen and I would like to propose a technical session on
in-kernel IPC systems. For roughly half a year now we have been
developing (with others) a capability-based [1] IPC system for linux,
called bus1 [2]. We would like to present bus1, start a discussion on
open problems, and talk about the possible path forward for an upstream
inclusion.

While bus1 emerged out of the kdbus project, it is a new, independent
project, designed from scratch. Its main goal is to implement an n-to-n
communication bus on linux. A lot of inspiration is taken from DBus,
from the most commonly used IPC systems of other OSs, and from related
research projects (including Android Binder, OS X/Hurd Mach IPC,
Solaris Doors, Microsoft Midori IPC, seL4, Sandstorm's Cap'n Proto,
...).

The bus1 IPC system was designed to...

 o be a machine-local IPC system. It is a fast communication channel
   between local threads and processes, independent of the marshaling
   format used.

 o provide secure, reliable capability-based [1] communication. A
   message is always sent to a capability, and the caller must own that
   capability; otherwise the operation is refused (a rough sketch of
   this model follows below this list).

 o efficiently support n-to-n communication. Every peer can communicate
   with every other peer (given the right capabilities), with minimal
   overhead for state-tracking.

 o be well-suited for both unicast and multicast messages.

 o guarantee a global message order [3], allowing clients to rely on
   causal ordering between messages they send and receive (for further
   reading, see Leslie Lamport's work on distributed systems [4]).

 o scale with the number of CPUs available. There is no global context
   specific to the bus1 IPC; instead, all communication happens based on
   local context only. That is, if two independent peers never talk to
   each other, their operations never share any memory (no shared
   locks, no shared state, etc.).

 o avoid any in-kernel buffering and rather transfer data directly
   from a sender into the receiver's mappable queue (single-copy).
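
For illustration, a deliberately simplified and purely hypothetical
sketch of what a capability-based send could look like from user-space
follows. The ioctl name, struct layout, and helper below are invented
placeholders, not the actual bus1 UAPI; see the sources [2] for the
real interface.

/* Hypothetical sketch only: BUS1_CMD_SEND_SKETCH and struct
 * bus1_send_sketch are invented placeholder names, not the real bus1
 * UAPI.  peer_fd is assumed to be an already-open bus1 peer. */
#include <stdint.h>
#include <sys/ioctl.h>

struct bus1_send_sketch {
        uint64_t destination;   /* capability (handle) the caller owns */
        const void *data;       /* marshaled payload, format-agnostic */
        uint64_t n_data;
};

#define BUS1_CMD_SEND_SKETCH _IOWR('b', 0x01, struct bus1_send_sketch)

static int send_to_capability(int peer_fd, uint64_t capability,
                              const void *data, uint64_t n_data)
{
        struct bus1_send_sketch msg = {
                .destination = capability,
                .data = data,
                .n_data = n_data,
        };

        /* The kernel verifies that `capability` is a handle owned by
         * peer_fd; without it, the send is refused.  The payload is
         * copied once, directly into the receiver's mappable queue. */
        return ioctl(peer_fd, BUS1_CMD_SEND_SKETCH, &msg);
}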

A user-space implementation of bus1 (or even any bus-based IPC) was
considered, but was found to have several seemingly unavoidable issues.

 o To guarantee reliable, global message ordering including multicasts,
   as well as to provide reliable capabilities, a bus-broker is
   required. In other words, the current linux syscall API is not
   sufficient to implement the design as described above in an efficient
   way without a dedicated, trusted, privileged process that manages the
   bus and routes messages between the peers.

 o Whenever a bus-broker is involved, any message transaction between
   two clients requires the broker process to execute code in its own
   time-slice. While this time-slice can be distributed fairly across
   clients, it is ultimately always accounted to the user running the
   broker, rather than to the originating user. Kernel time-slice
   accounting and the accounting done in the broker are completely
   separate and cannot base decisions on each other's data.
   Furthermore, the broker needs to run with quite excessive resource
   limits and execution rights to be able to serve requests of
   high-priority peers, which makes the same resources available to
   low-priority peers as well.
   An in-kernel IPC mechanism removes the need for such a highly
   privileged bus-broker, and instead accounts every operation and
   resource exactly to the calling user, cgroup, and process.

 o Bus IPC often involves peers requesting services from other trusted
   peers, and waiting for a possible result before continuing. Where
   such a trust relationship exists, privileged processes actively want
   priority inheritance when calling into less privileged, but trusted,
   processes. There is currently no known way to implement this in a
   user-space broker without requiring n^2 PI-futex pairs (see the
   locking sketch after this list).

 o A userspace broker would entail two UDS transactions and potentially
   an extra context switch, compared to a single bus1 transaction with
   the in-kernel broker. Our x86 benchmarks (before any serious
   optimization work has started) show that two UDS transactions are
   always slower than one bus1 transaction. On top of that comes the
   extra context switch, which costs about the same as a full bus1
   transaction, as well as any time spent in the broker itself. Even
   against an imaginary zero-overhead user-space broker, we found the
   in-kernel broker to be >40% faster. The numbers will differ between
   machines, but the reduced latency is undeniable.

 o Accounting of inflight resources (e.g., file-descriptors) in a broker
   is completely broken. Right now, any outgoing message of a broker
   accounts the contained FDs to the broker; however, the broker has no
   way to track those outgoing FDs, so it cannot attribute them to the
   original sender of the FD, opening the door to DoS attacks (see the
   FD-passing sketch after this list).

 o LSMs and audit cannot hook into the broker, nor obtain any additional
   routing information. Thus, audit cannot log proper information, and
   the desired security model has to be implemented by, and trusted to,
   a user-space process instead of being enforced by an LSM.

 o The kernel itself can never operate on the bus, nor provide services
   seamlessly to user-space (e.g., like netlink does), unless the bus is
   implemented in the kernel.

 o If a broker is involved, no communication can be ordered against
   side-channels. A kernel implementation, on the other hand, provides
   strong ordering against any other event happening on the system.
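
To illustrate what one such "PI-futex pair" means in practice, here is
a simplified sketch of a single priority-inheriting lock shared between
a client and a service. Error handling and the FUTEX_WAITERS details
are omitted, and the pi_lock()/pi_unlock() helpers are our own names;
only futex(2) with FUTEX_LOCK_PI/FUTEX_UNLOCK_PI is kernel API. A
user-space broker wanting PI between every client/service pair would
need one such lock per pair, hence the n^2 growth mentioned above.

/* Simplified sketch of a priority-inheriting lock built on futex(2). */
#include <linux/futex.h>
#include <stdatomic.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static long sys_futex(uint32_t *uaddr, int op)
{
        return syscall(SYS_futex, uaddr, op, 0, NULL, NULL, 0);
}

/* Fast path: CAS our TID into the futex word.  On contention the
 * kernel queues us and boosts the owner's priority to ours until it
 * unlocks (priority inheritance). */
static void pi_lock(_Atomic uint32_t *word)
{
        uint32_t expected = 0;
        uint32_t tid = (uint32_t)syscall(SYS_gettid);

        if (!atomic_compare_exchange_strong(word, &expected, tid))
                sys_futex((uint32_t *)word, FUTEX_LOCK_PI);
}

static void pi_unlock(_Atomic uint32_t *word)
{
        uint32_t tid = (uint32_t)syscall(SYS_gettid);

        /* Fast path: no waiters, the word still holds only our TID. */
        if (!atomic_compare_exchange_strong(word, &tid, 0))
                sys_futex((uint32_t *)word, FUTEX_UNLOCK_PI);
}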
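
And to illustrate the UDS and file-descriptor points: with a user-space
broker, forwarding an FD to a receiver is a plain SCM_RIGHTS sendmsg()
over an AF_UNIX socket, roughly as sketched below (forward_fd() is our
own helper name). Each such sendmsg()/recvmsg() round-trip is one of
the two UDS transactions counted in the benchmark above, and from the
kernel's point of view the broker is the sender of the message, so the
inflight FD is charged to the broker rather than to whoever originally
handed it in.

/* Sketch: a broker forwarding a file-descriptor over AF_UNIX. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int forward_fd(int receiver_sock, int fd)
{
        char dummy = 0;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
                struct cmsghdr align;
                char buf[CMSG_SPACE(sizeof(int))];
        } control;
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = control.buf,
                .msg_controllen = sizeof(control.buf),
        };
        struct cmsghdr *cmsg;

        memset(control.buf, 0, sizeof(control.buf));
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        /* The FD becomes inflight on sendmsg() and is installed into
         * the receiver's file table on recvmsg(); while inflight it is
         * accounted against the sendmsg() caller, i.e. the broker. */
        return sendmsg(receiver_sock, &msg, 0);
}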

The implementation of bus1.ko, with its <5k LOC, is relatively small,
but still takes a considerable amount of time to review and understand. We
would like to use the kernel-summit as an opportunity to present bus1,
and answer questions on its design, implementation, and use of other
kernel subsystems. We encourage everyone to look into the sources, but
we still believe that a personal discussion up-front would save everyone
a lot of time and energy. Furthermore, it would also allow us to
collectively solve remaining issues.

Everyone interested in IPC is invited to the discussion. In particular,
we would welcome everyone who participated in the Binder and kdbus
discussions, or is involved in shmem+memcg (or other bus1-related
subsystems), possibly including:

 o Andy Lutomirski
 o Greg Kroah-Hartman
 o Steven Rostedt
 o Eric W. Biederman
 o Jiri Kosina
 o Borislav Petkov
 o Michal Hocko (memcg)
 o Johannes Weiner (memcg)
 o Hugh Dickins (shmem)
 o Tom Gundersen (bus1)
 o David Herrmann (bus1)

Thanks!
    Tom, David

[1] https://en.wikipedia.org/wiki/Capability-based_security
[2] http://www.bus1.org
[3] https://github.com/bus1/bus1/wiki/Message-ordering
[4] http://amturing.acm.org/p558-lamport.pdf


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Andy Lutomirski @ 2016-07-28 22:57 UTC (permalink / raw)
  To: David Herrmann; +Cc: ksummit-discuss

On Jul 28, 2016 3:24 PM, "David Herrmann" <dh.herrmann@gmail.com> wrote:
>

> Everyone interested in IPC is invited to the discussion. In particular,
> we would welcome everyone who participated in the Binder and kdbus
> discussions, or is involved in shmem+memcg (or other bus1-related
> subsystems), possibly including:
>
>  o Andy Lutomirski

Count me in.


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Jiri Kosina @ 2016-07-28 23:42 UTC (permalink / raw)
  To: David Herrmann; +Cc: ksummit-discuss

On Fri, 29 Jul 2016, David Herrmann wrote:

[ .. snip .. ]
> Everyone interested in IPC is invited to the discussion. In particular,
> we would welcome everyone who participated in the Binder and kdbus
> discussions, or is involved in shmem+memcg (or other bus1-related
> subsystems), possibly including:
> 
>  o Andy Lutomirski
>  o Greg Kroah-Hartman
>  o Steven Rostedt
>  o Eric W. Biederman
>  o Jiri Kosina

I'd definitely be very interested in participating in such a discussion,
should it happen at KS or wherever else.

Please put me in the "I still don't fully understand why teaching AF_UNIX
about multicast shouldn't do the job" discussion group :)

Thanks David,

-- 
Jiri Kosina
SUSE Labs


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Greg KH @ 2016-07-29  2:41 UTC (permalink / raw)
  To: David Herrmann; +Cc: ksummit-discuss

On Fri, Jul 29, 2016 at 12:24:03AM +0200, David Herrmann wrote:
> Everyone interested in IPC is invited to the discussion. In particular,
> we would welcome everyone who participated in the Binder and kdbus
> discussions, or is involved in shmem+memcg (or other bus1-related
> subsystems), possibly including:
> 
>  o Greg Kroah-Hartman

Count me in.

thanks,

greg k-h


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Hannes Reinecke @ 2016-07-29  7:12 UTC (permalink / raw)
  To: ksummit-discuss

On 07/29/2016 01:42 AM, Jiri Kosina wrote:
> On Fri, 29 Jul 2016, David Herrmann wrote:
> 
> [ .. snip .. ]
>> Everyone interested in IPC is invited to the discussion. In particular,
>> we would welcome everyone who participated in the Binder and kdbus
>> discussions, or is involved in shmem+memcg (or other bus1-related
>> subsystems), possibly including:
>>
>>  o Andy Lutomirski
>>  o Greg Kroah-Hartman
>>  o Steven Rostedt
>>  o Eric W. Biederman
>>  o Jiri Kosina
> 
> I'd definitely be very interested in participating in such a discussion,
> should it happen at KS or wherever else.
> 
> Please put me in the "I still don't fully understand why teaching AF_UNIX
> about multicast shouldn't do the job" discussion group :)
> 
Oh.
_That_ topic would definitely interest me, too.
Not so much from the obvious kdbus/binder/<insert your favourite topic
here> type of thing, but because I'm still struggling to find a way of
passing information from one side of the kernel to the other.
notifier_chains simply don't fit the bill here.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		               zSeries & Storage
hare@suse.com			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Steven Rostedt @ 2016-07-30  2:45 UTC (permalink / raw)
  To: David Herrmann; +Cc: ksummit-discuss

On Fri, 29 Jul 2016 00:24:03 +0200
David Herrmann <dh.herrmann@gmail.com> wrote:


>  o Steven Rostedt

Count me in too. And thanks; on the surface, this looks like a much
improved implementation over kdbus.

-- Steve


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Arnd Bergmann @ 2016-07-30  9:24 UTC (permalink / raw)
  To: ksummit-discuss

On Friday, July 29, 2016 12:24:03 AM CEST David Herrmann wrote:
> Tom Gundersen and I would like to propose a technical session on
> in-kernel IPC systems. For roughly half a year now we have been
> developing (with others) a capability-based [1] IPC system for linux,
> called bus1 [2]. We would like to present bus1, start a discussion on
> open problems, and talk about the possible path forward for an upstream
> inclusion.
[ ... ]

I'd like to join in discussing the user interface. The current version
seems (compared to kdbus) simple enough that we could consider using
syscalls instead of a miscdev.

	Arnd


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Tom Gundersen @ 2016-07-30 21:58 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: ksummit-discuss

Hi Arnd,

On Sat, Jul 30, 2016 at 11:24 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Friday, July 29, 2016 12:24:03 AM CEST David Herrmann wrote:
>> Tom Gundersen and I would like to propose a technical session on
>> in-kernel IPC systems. For roughly half a year now we have been
>> developing (with others) a capability-based [1] IPC system for linux,
>> called bus1 [2]. We would like to present bus1, start a discussion on
>> open problems, and talk about the possible path forward for an upstream
>> inclusion.

[ ... ]

> I'd like to join in discussing the user interface. The current version
> seems (compared to kdbus) simple enough that we could consider using
> syscalls instead of a miscdev.

Yeah, using syscalls would probably make the most sense if and when we
submit it upstream. The only reason we haven't done so yet is that it
has been easier to develop out-of-tree as a module.

Cheers,

Tom


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Josh Triplett @ 2016-07-30 22:21 UTC (permalink / raw)
  To: David Herrmann; +Cc: ksummit-discuss

On Fri, Jul 29, 2016 at 12:24:03AM +0200, David Herrmann wrote:
> Tom Gundersen and I would like to propose a technical session on
> in-kernel IPC systems. For roughly half a year now we have been
> developing (with others) a capability-based [1] IPC system for linux,
> called bus1 [2]. We would like to present bus1, start a discussion on
> open problems, and talk about the possible path forward for an upstream
> inclusion.
> 
> While bus1 emerged out of the kdbus project, it is a new, independent
> project, designed from scratch.

I'd heard that the plan for bus1 was to provide DBus-compatible
semantics via a userspace compatibility layer.  Do you still plan to do
that, so that current users of DBus can run on bus1 without
modification, or will current users of DBus need to port to bus1?

- Josh Triplett


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Tom Gundersen @ 2016-07-30 22:25 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: ksummit-discuss

Hi Jiri,

On Fri, Jul 29, 2016 at 1:42 AM, Jiri Kosina <jikos@kernel.org> wrote:
> Please put me in the "I still don't fully understand why teaching AF_UNIX
> about multicast shouldn't do the job" discussion group :)

Would be happy to discuss that. Though the short version is that there
is very little overlap between AF_UNIX and what we would need from
bus1.

Cheers,

Tom


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: David Herrmann @ 2016-08-01 10:36 UTC (permalink / raw)
  To: Josh Triplett; +Cc: ksummit-discuss

Hi Josh!

On Sun, Jul 31, 2016 at 12:21 AM, Josh Triplett <josh@joshtriplett.org> wrote:
> On Fri, Jul 29, 2016 at 12:24:03AM +0200, David Herrmann wrote:
>> Tom Gundersen and I would like to propose a technical session on
>> in-kernel IPC systems. For roughly half a year now we have been
>> developing (with others) a capability-based [1] IPC system for linux,
>> called bus1 [2]. We would like to present bus1, start a discussion on
>> open problems, and talk about the possible path forward for an upstream
>> inclusion.
>>
>> While bus1 emerged out of the kdbus project, it is a new, independent
>> project, designed from scratch.
>
> I'd heard that the plan for bus1 was to provide DBus-compatible
> semantics via a userspace compatibility layer.  Do you still plan to do
> that, so that current users of DBus can run on bus1 without
> modification, or will current users of DBus need to port to bus1?

DBus has very limited requirements on the transport layer, so bus1
would work just fine. If you want to get rid of the broker, though,
you need to support some of the quirks of DBus. Most importantly,
dbus-daemon supports almost arbitrary broadcast-filters on any
transmitted broadcast message, as well as a very expressive policy
language for per-message filtering. We support neither in bus1, since
we abstain from any global state. Hence, you cannot get rid of the
broker if you use bus1 as the DBus transport.

Having said that, we still believe that DBus will not go away
overnight, and we do want to improve it. By restricting the DBus spec
to destination-based broadcast-filters, and by employing a replacement
policy language similar to the one used by kdbus, you can shortcut the
broker. All you would need for DBus compatibility is a bus-manager
that connects peers according to the policy, but no longer routes
messages. Furthermore, by shifting DBus to signal-subscriptions rather
than signal-matching, we pave the way for a future without such a
bus-manager.
If there are clients incompatible with the mentioned restrictions of
the DBus spec (and I am unaware of any besides 'dconf', for which we
know workarounds), a bus-broker can always be employed as an add-on,
thus keeping full DBus compatibility.
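
As a purely illustrative aside (sd-bus shown here; libdbus has an
equivalent call), this is the kind of almost arbitrary match rule a
client can install today, which the daemon then has to evaluate
against every broadcast it routes:

/* Illustration of today's signal-matching model, not a bus1 API. */
#include <systemd/sd-bus.h>

static int on_name_owner_changed(sd_bus_message *m, void *userdata,
                                 sd_bus_error *ret_error)
{
        /* ... react to the signal ... */
        return 0;
}

static int subscribe(sd_bus *bus)
{
        /* The match string is free-form; the daemon keeps it as global
         * per-connection state and filters every broadcast against it. */
        return sd_bus_add_match(bus, NULL,
                                "type='signal',"
                                "interface='org.freedesktop.DBus',"
                                "member='NameOwnerChanged'",
                                on_name_owner_changed, NULL);
}

With destination-based filtering and signal-subscriptions, the idea is
that the receiver subscribes at the emitting peer instead, so no
central process needs to keep such filter state or inspect every
broadcast.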

Thanks
David


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Josh Triplett @ 2016-08-01 18:53 UTC (permalink / raw)
  To: David Herrmann; +Cc: ksummit-discuss

On Mon, Aug 01, 2016 at 12:36:07PM +0200, David Herrmann wrote:
> [ ... ]
> If there are clients incompatible with the mentioned restrictions of
> the DBus spec (and I am unaware of any besides 'dconf', for which we
> know workarounds), a bus-broker can always be employed as an add-on,
> thus keeping full DBus compatibility.

A compatibility broker in userspace seems fine for compatibility; I just
wanted to learn more about the plan to handle that.

- Josh Triplett


* Re: [Ksummit-discuss] [TECH TOPIC] Bus IPC
From: Jiri Kosina @ 2016-08-02  8:43 UTC (permalink / raw)
  To: David Herrmann; +Cc: ksummit-discuss

On Fri, 29 Jul 2016, David Herrmann wrote:

> While bus1 emerged out of the kdbus project, it is a new, independent
> project, designed from scratch. Its main goal is to implement an n-to-n
> communication bus on linux. A lot of inspiration is taken from DBus,
> from the most commonly used IPC systems of other OSs, and from related
> research projects (including Android Binder, OS X/Hurd Mach IPC,
> Solaris Doors, Microsoft Midori IPC, seL4, Sandstorm's Cap'n Proto,
> ...).
[ ... ]
>  o Andy Lutomirski
>  o Greg Kroah-Hartman
>  o Steven Rostedt
>  o Eric W. Biederman
>  o Jiri Kosina

So I've given bus1 a cursory look, and I fully agree that this would be
an excellent topic; most importantly to make sure that maintainers who
have been involved in previous kdbus discussions / reviews get an
overview of how exactly this is substantially different, and can throw
away any prejudice they might have gathered in the previous review
rounds.

Thanks,

-- 
Jiri Kosina
SUSE Labs

