* Initial MCTP design proposal
@ 2018-12-07  2:41 Jeremy Kerr
  2018-12-07  4:15 ` Naidoo, Nilan
                   ` (3 more replies)
  0 siblings, 4 replies; 30+ messages in thread
From: Jeremy Kerr @ 2018-12-07  2:41 UTC (permalink / raw)
  To: openbmc
  Cc: Supreeth Venkatesh, David Thompson, Emily Shaffer, Dong Wei,
	Naidoo, Nilan, Andrew Geissler

[-- Attachment #1: Type: text/plain, Size: 8915 bytes --]

Hi OpenBMCers!

In an earlier thread, I promised to sketch out a design for an MCTP
implementation in OpenBMC, and I've included it below.

This is roughly in the OpenBMC design document format (thanks for the
reminder Andrew), but I've sent it to the list for initial review before
proposing to gerrit - mainly because there were a lot of folks who
expressed interest on the list. I suggest we move to gerrit once we get
specific feedback coming in. Let me know if you have general comments
whenever you like though.

In parallel, I've been developing a prototype for the MCTP library
mentioned below, including a serial transport binding. I'll push to
github soon and post a link, once I have it in a
slightly-more-consumable form.

Cheers,


Jeremy

--------------------------------------------------------

# Host/BMC communication channel: MCTP & PLDM

Author: Jeremy Kerr <jk@ozlabs.org> <jk>

## Problem Description

Currently, we have a few different methods of communication between host
and BMC. This is primarily IPMI-based, but also includes a few
hardware-specific side-channels, like hiomap. On OpenPOWER hardware at
least, we've definitely started to hit some of the limitations of IPMI
(for example, we need more than 255 sensors), as well as limitations of
the hardware channels that IPMI typically uses.

This design aims to use the Management Component Transport Protocol
(MCTP) to provide a common transport layer over the multiple channels
that OpenBMC platforms provide. Then, on top of MCTP, we have the
opportunity to move to newer host/BMC messaging protocols to overcome
some of the limitations we've encountered with IPMI.

## Background and References

Separating the "transport" and "messaging protocol" parts of the current
stack allows us to design these parts separately. Currently, IPMI
defines both of these: we have BT and KCS (both defined as part of the
IPMI 2.0 standard) as the transports, and IPMI itself as the messaging
protocol.

Some efforts to improve the hardware transport mechanism of IPMI have
been attempted, but not in a cross-implementation manner so far. Nor do
these address some of the limitations of the IPMI data model.

MCTP defines a standard transport protocol, plus a number of separate
hardware bindings for the actual transport of MCTP packets. These are
defined by the DMTF's PMCI working group; standards are available at:

  https://www.dmtf.org/standards/pmci

I have included a small diagram of how these standards may fit together
in an OpenBMC system. The DSP numbers there are references to DMTF
standards.

One of the key concepts here is the separation of the transport
protocol from the hardware bindings; this means that an MCTP "stack" may
be using an I2C, PCI, serial or custom hardware channel, without the
higher layers of that stack needing to be aware of the hardware
implementation. These higher levels only need to be aware that they are
communicating with a certain entity, identified by an MCTP Endpoint ID
(EID).

I've mainly focussed on the "transport" part of the design here. While
this does enable new messaging protocols (mainly PLDM), I haven't
covered that much; we will propose those details for a separate design
effort.

As part of the design, I have referred to MCTP "messages" and "packets";
this is intentional, to match the definitions in the MCTP standard. MCTP
messages are the higher-level data transferred between MCTP endpoints,
whereas packets are typically smaller, and are what is actually sent
over the hardware. Messages that are larger than the hardware MTU are
split into individual packets by the transmit implementation, and
reassembled at the receive implementation.
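
To make the message/packet split concrete, below is a rough sketch of
the per-packet transport header and the transmit-side fragmentation, as
I understand DSP0236 (field names and layout here are illustrative
only; the spec is authoritative):

    #include <stddef.h>
    #include <stdint.h>

    /* per-packet MCTP transport header, per my reading of DSP0236 */
    struct mctp_hdr {
        uint8_t ver;            /* [7:4] reserved, [3:0] header version */
        uint8_t dest;           /* destination EID */
        uint8_t src;            /* source EID */
        uint8_t flags_seq_tag;  /* SOM, EOM, pkt seq, TO, msg tag */
    };

    #define MCTP_HDR_FLAG_SOM   (1 << 7)
    #define MCTP_HDR_FLAG_EOM   (1 << 6)
    #define MCTP_HDR_SEQ_SHIFT  4

    /* Split one message into MTU-sized packets. tx_pkt() stands in for
     * whatever the hardware binding provides. */
    static int mctp_message_frag_tx(uint8_t src, uint8_t dest, uint8_t tag,
                                    const uint8_t *msg, size_t len, size_t mtu,
                                    int (*tx_pkt)(const struct mctp_hdr *,
                                                  const uint8_t *, size_t))
    {
        uint8_t seq = 0;
        size_t pos = 0;

        while (pos < len) {
            size_t chunk = (len - pos < mtu) ? len - pos : mtu;
            struct mctp_hdr hdr = {
                .ver = 0x01,
                .dest = dest,
                .src = src,
                .flags_seq_tag = (uint8_t)((seq << MCTP_HDR_SEQ_SHIFT) |
                                           (tag & 0x7)),
            };
            int rc;

            if (pos == 0)
                hdr.flags_seq_tag |= MCTP_HDR_FLAG_SOM;
            if (pos + chunk == len)
                hdr.flags_seq_tag |= MCTP_HDR_FLAG_EOM;

            rc = tx_pkt(&hdr, msg + pos, chunk);
            if (rc)
                return rc;

            seq = (seq + 1) & 0x3;  /* 2-bit packet sequence number */
            pos += chunk;
        }

        return 0;
    }

The receive side does the inverse: collect packets for a given source
EID and message tag until EOM, checking the sequence numbers, then hand
the reassembled message up the stack.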

A final important point is that this design is for the host <--> BMC
channel *only*. Even if we do replace IPMI for the host interface, we
will certainly need an IPMI interface available for external system
management.

## Requirements

Any channel between host and BMC should:

 - Have a simple serialisation and deserialisation format, to enable
   implementations in host firmware, which have widely varying runtime
   capabilities

 - Allow different hardware channels, as we have a wide variety of
   target platforms for OpenBMC

 - Be usable over simple hardware implementations, but have a facility
   for higher bandwidth messaging on platforms that require it.

 - Ideally, integrate with newer messaging protocols

## Proposed Design

The MCTP core specification just provides the packetisation, routing and
addressing mechanisms. The actual transmit/receive of those packets is
up to the hardware binding of the MCTP transport.

For OpenBMC, we would introduce an MCTP daemon, which implements the
transport over a configurable hardware channel (eg, serial UART, I2C or
PCI). This daemon is responsible for the packetisation and routing of
MCTP messages to and from host firmware.

I see two options for the "inbound" or "application" interface of the
MCTP daemon:

 - it could handle upper parts of the stack (eg PLDM) directly, through
   in-process handlers that register for certain MCTP message types; or

 - it could channel raw MCTP messages (reassembled from MCTP packets) to
   DBUS messages (similar to the current IPMI host daemons), where the
   upper layers receive and act on those DBUS events.

I have a preference for the former, but I would be interested to hear
from the IPMI folks about how the latter structure has worked in the
past.
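
To illustrate the first option, a hypothetical registration interface
for in-process handlers might look something like this (all names here
are invented for illustration, not a proposed ABI):

    #include <stddef.h>
    #include <stdint.h>

    /* Upper-layer protocols register per-message-type handlers with the
     * daemon; the daemon invokes them with each reassembled message. */
    typedef void (*mctp_msg_handler_fn)(uint8_t src_eid, uint8_t msg_type,
                                        const void *msg, size_t len,
                                        void *ctx);

    int mctp_register_msg_handler(uint8_t msg_type,
                                  mctp_msg_handler_fn handler, void *ctx);

    /* example: a PLDM responder hooking in (MCTP message type 0x01) */
    static void pldm_rx(uint8_t src_eid, uint8_t msg_type,
                        const void *msg, size_t len, void *ctx)
    {
        /* decode the PLDM header, dispatch the command, and queue a
         * response message back to src_eid */
    }

    static void pldm_init(void)
    {
        mctp_register_msg_handler(0x01, pldm_rx, NULL);
    }

The DBUS option would look much the same from the upper layers' point
of view, just with the handler living in a separate process and the
reassembled message crossing the bus as a signal or method call.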

The proposed implementation here is to produce an MCTP "library" which
provides the packetisation and routing functions, between:

 - an "upper" messaging transmit/receive interface, for tx/rx of a full
   message to a specific endpoint

 - a "lower" hardware binding for transmit/receive of individual
   packets, providing a method for the core to tx/rx each packet to
   hardware

The lower interface would be plugged in to one of a number of
hardware-specific binding implementations (most of which would be
included in the library source tree, but others could be plugged in
too).
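
As a rough sketch, those two interfaces might look something like the
following (again, names are illustrative rather than final):

    #include <stddef.h>
    #include <stdint.h>

    struct mctp;            /* opaque core context */
    struct mctp_pktbuf;     /* one packet: transport header plus payload */

    /* "upper" interface: whole-message tx/rx against an endpoint */
    int mctp_message_tx(struct mctp *mctp, uint8_t dest_eid,
                        const void *msg, size_t len);

    typedef void (*mctp_message_rx_fn)(struct mctp *mctp, uint8_t src_eid,
                                       const void *msg, size_t len,
                                       void *ctx);
    void mctp_set_rx(struct mctp *mctp, mctp_message_rx_fn rx, void *ctx);

    /* "lower" interface: a hardware binding supplies packet tx, and
     * feeds received packets back into the core for reassembly */
    struct mctp_binding {
        const char *name;
        size_t pkt_mtu;
        int (*tx)(struct mctp_binding *binding, struct mctp_pktbuf *pkt);
        void *binding_data;     /* eg a file descriptor for a serial port */
    };

    int mctp_register_binding(struct mctp *mctp, struct mctp_binding *binding);

    /* called by a binding when a packet arrives from the hardware */
    void mctp_binding_rx(struct mctp_binding *binding, struct mctp_pktbuf *pkt);

A serial or LPC binding then only has to implement tx() and call
mctp_binding_rx() from its receive path; everything above the packet
level stays common.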

The reason for a library is to allow the same MCTP implementation to be
used in both OpenBMC and host firmware; the library should be
bidirectional. To allow this, the library would be written in portable C
(structured in a way that can be compiled as extern "C" in C++
codebases), and be able to be configured to suit those runtime
environments (for example, POSIX IO may not be available on all
platforms; we should be able to compile the library to suit). The
licence for the library should also allow this re-use; I'd suggest a
dual Apache & GPL licence.

As for the hardware bindings, we would want to implement a serial
transport binding first, to allow easy prototyping in simulation. For
OpenPOWER, we'd want to implement a "raw LPC" binding for better
performance, and later PCIe for large transfers. I imagine that there is
a need for an I2C binding implementation for other hardware platforms
too.

Lastly, I don't want to exclude any currently-used interfaces by
implementing MCTP - this should be an optional component of OpenBMC, and
not require platforms to implement it.

## Alternatives Considered

There have been two main alternatives to this approach:

Continue using IPMI, but start making more use of OEM extensions to
suit the requirements of new platforms. However, given that the IPMI
standard is no longer under active development, we would likely end up
with a large amount of platform-specific customisations. This also does
not solve the hardware channel issues in a standard manner.

Redfish between host and BMC. This would mean that host firmware needs
an HTTP client, a TCP/IP stack, a JSON (de)serialiser, and support for
Redfish schema. This is not feasible for all host firmware
implementations; certainly not for OpenPOWER. It's possible that we
could run a simplified Redfish stack - indeed, MCTP has a proposal for a
Redfish-over-MCTP protocol, which uses simplified serialisation and no
requirement on HTTP. However, this still introduces a large amount of
complexity in host firmware.

## Impacts

Development would be required to implement the MCTP transport, plus any
new users of the MCTP messaging (eg, a PLDM implementation). These would
somewhat duplicate the work we have in IPMI handlers.

We'd want to keep IPMI running in parallel, so the "upgrade" path should
be fairly straightforward.

Design and development needs to involve potential host firmware
implementations.

## Testing

For the core MCTP library, we are able to run tests in complete
isolation (I have already been able to run a prototype MCTP stack
through the afl fuzzer) to ensure that the core transport protocol
works.
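
(For reference, that kind of harness is tiny: it just feeds the
fuzzer-generated bytes into the core's packet-rx path. The entry-point
names below are hypothetical, matching the library sketch above.)

    #include <stdint.h>
    #include <stdio.h>

    struct mctp;
    struct mctp *mctp_init(void);
    void mctp_rx_raw(struct mctp *mctp, const uint8_t *buf, size_t len);

    int main(void)
    {
        static uint8_t buf[4096];
        struct mctp *mctp = mctp_init();

        /* afl provides each test case on stdin; treat it as a stream
         * of packets for the core to parse and reassemble */
        size_t len = fread(buf, 1, sizeof(buf), stdin);
        mctp_rx_raw(mctp, buf, len);

        return 0;
    }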

For MCTP hardware bindings, we would develop channel-specific tests that
would be run in CI on both host and BMC.

For the OpenBMC MCTP daemon implementation, testing models would depend
on the structure we adopt in the design section.

[-- Attachment #2: mctp.png --]
[-- Type: image/png, Size: 100890 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: Initial MCTP design proposal
  2018-12-07  2:41 Initial MCTP design proposal Jeremy Kerr
@ 2018-12-07  4:15 ` Naidoo, Nilan
  2018-12-07  5:06   ` Jeremy Kerr
  2018-12-07  5:13 ` Deepak Kodihalli
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 30+ messages in thread
From: Naidoo, Nilan @ 2018-12-07  4:15 UTC (permalink / raw)
  To: Jeremy Kerr, openbmc
  Cc: Supreeth Venkatesh, David Thompson, Emily Shaffer, Dong Wei,
	Andrew Geissler

Hi Jeremy,

Thanks for making a start on this. I generally agree with the architecture. I was thinking along similar lines: we would have an MCTP daemon that implements the features described in the MCTP base specification, and would provide bindings to hardware interfaces on its lower interface and bindings to various messaging services on its upper interface. I also like the idea that it is portable and can be easily adapted to other runtime environments.

As far as the features that need to be supported, we are initially interested in PCIe VDM and SMBus/I2C hardware bindings. The BMC will also need to be the bus owner for the interfaces. It is not clear to me yet if we would also need bridging for our initial use cases. The message types that we need to support are NVMe-MI, PLDM and Vendor Defined - PCI (0x7E). We currently don't have a strong need for supporting the host interface.

Thanks
Nilan 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07  4:15 ` Naidoo, Nilan
@ 2018-12-07  5:06   ` Jeremy Kerr
  2018-12-07  5:40     ` Naidoo, Nilan
  0 siblings, 1 reply; 30+ messages in thread
From: Jeremy Kerr @ 2018-12-07  5:06 UTC (permalink / raw)
  To: Naidoo, Nilan, openbmc
  Cc: Supreeth Venkatesh, David Thompson, Emily Shaffer, Dong Wei,
	Andrew Geissler

Hi Nilan,

Thanks for taking a look.

> As far as the features that need to be supported, we are initially
> interested in PCIe VDM and SMBus/I2C hardware bindings.

OK, sure thing. Is this on ASPEED hardware?

>   The BMC will also need to be the Bus owner for the interfaces.
>  It is not clear to me yet if we would also need bridging for our
> initial use cases.

Yes, I think this matches our designs too; I don't see us needing
bridging at present.

>   The message types that we need to support are NVMe-MI, PLDM and
> Vendor Defined - PCI (0x7E). We currently don't have a strong need for
> supporting the host interface. 

Oh, interesting, I didn't expect folks to be using MCTP but not
including the host channel. There's no issue with that, so I'll update
the wording so it's clear that this can be accommodated.

Cheers,


Jeremy

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07  2:41 Initial MCTP design proposal Jeremy Kerr
  2018-12-07  4:15 ` Naidoo, Nilan
@ 2018-12-07  5:13 ` Deepak Kodihalli
  2018-12-07  7:41   ` Jeremy Kerr
  2018-12-07 17:09   ` Supreeth Venkatesh
  2018-12-07 16:38 ` Supreeth Venkatesh
  2019-02-07 15:51 ` Brad Bishop
  3 siblings, 2 replies; 30+ messages in thread
From: Deepak Kodihalli @ 2018-12-07  5:13 UTC (permalink / raw)
  To: Jeremy Kerr, openbmc
  Cc: Emily Shaffer, David Thompson, Dong Wei, Supreeth Venkatesh,
	Naidoo, Nilan

On 07/12/18 8:11 AM, Jeremy Kerr wrote:
> Hi OpenBMCers!
> 
> In an earlier thread, I promised to sketch out a design for a MCTP
> implementation in OpenBMC, and I've included it below.


Thanks Jeremy for sending this out. This looks good (I have just one
comment below).

Question for everyone : do you have plans to employ PLDM over MCTP?

We are interested in PLDM for various "inside the box" communications
(at the moment for the Host <-> BMC communication). I'd like to propose
a design for a PLDM stack on OpenBMC, and will send a design doc for
review on the mailing list before long (I've just started with some
initial sketches). I'd also like to know if others have embarked on a
similar activity, so that we can collaborate early and avoid duplicate
work.

> I see two options for the "inbound" or "application" interface of the
> MCTP daemon:
> 
>   - it could handle upper parts of the stack (eg PLDM) directly, through
>     in-process handlers that register for certain MCTP message types; or

We'd like to somehow ensure (at least via documentation) that the 
handlers don't block the MCTP daemon from processing incoming traffic. 
The handlers might anyway end up making IPC calls (via D-Bus) to other 
processes. The second approach below seems to alleviate this problem.

>   - it could channel raw MCTP messages (reassembled from MCTP packets) to
>     DBUS messages (similar to the current IPMI host daemons), where the
>     upper layers receive and act on those DBUS events.

Regards,
Deepak

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: Initial MCTP design proposal
  2018-12-07  5:06   ` Jeremy Kerr
@ 2018-12-07  5:40     ` Naidoo, Nilan
  0 siblings, 0 replies; 30+ messages in thread
From: Naidoo, Nilan @ 2018-12-07  5:40 UTC (permalink / raw)
  To: Jeremy Kerr, openbmc
  Cc: Supreeth Venkatesh, David Thompson, Emily Shaffer, Dong Wei,
	Andrew Geissler

Hi Jeremy,

The ASPEED does have  hardware support for PCIe VDM via the MCTP Controller block. 

Nilan


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07  5:13 ` Deepak Kodihalli
@ 2018-12-07  7:41   ` Jeremy Kerr
  2018-12-07 17:09   ` Supreeth Venkatesh
  1 sibling, 0 replies; 30+ messages in thread
From: Jeremy Kerr @ 2018-12-07  7:41 UTC (permalink / raw)
  To: Deepak Kodihalli, openbmc
  Cc: Emily Shaffer, David Thompson, Dong Wei, Supreeth Venkatesh,
	Naidoo, Nilan

Hi Deepak,

Thanks for the feedback! From your comment:

> > I see two options for the "inbound" or "application" interface of
> > the MCTP daemon:
> >
> >   - it could handle upper parts of the stack (eg PLDM) directly,
> >     through in-process handlers that register for certain MCTP
> >     message types; or
> 
> We'd like to somehow ensure (at least via documentation) that the 
> handlers don't block the MCTP daemon from processing incoming
> traffic. The handlers might anyway end up making IPC calls (via D-Bus) 
> to other processes. The second approach below seems to alleviate this
> problem.

There are two design elements there:

 - requiring that the handlers do not block; and

 - whether the handlers are implemented in-process, or remotely over IPC

and I think that while they're related, the non-blocking behaviour
doesn't necessarily require either the in-process or the IPC approach.
As long as we have the right interfaces, we can make that decision
separately.

If we use IPC, we'd still need to make sure that the IPC mechanism
doesn't block, so we'd have similar issues either way.
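
(Purely as an illustration of what I mean by "the right interfaces":
whichever model we pick, the daemon's main loop only ever makes
non-blocking calls, and handlers just queue work. The helper names
below are hypothetical; only the shape of the loop matters.)

    #include <poll.h>

    struct mctp;
    void mctp_process_rx(struct mctp *mctp);   /* read + dispatch, non-blocking */
    void ipc_process_pending(void *bus);       /* flush queued D-Bus traffic */

    void daemon_loop(struct mctp *mctp, void *bus,
                     struct pollfd *fds, int nfds)
    {
        for (;;) {
            poll(fds, nfds, -1);

            /* handlers invoked from here must not block: they only
             * queue work or issue async calls, whether they live
             * in-process or behind IPC */
            if (fds[0].revents)
                mctp_process_rx(mctp);
            if (fds[1].revents)
                ipc_process_pending(bus);
        }
    }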

Cheers,


Jeremy

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07  2:41 Initial MCTP design proposal Jeremy Kerr
  2018-12-07  4:15 ` Naidoo, Nilan
  2018-12-07  5:13 ` Deepak Kodihalli
@ 2018-12-07 16:38 ` Supreeth Venkatesh
  2019-02-07 15:51 ` Brad Bishop
  3 siblings, 0 replies; 30+ messages in thread
From: Supreeth Venkatesh @ 2018-12-07 16:38 UTC (permalink / raw)
  To: Jeremy Kerr, openbmc
  Cc: David Thompson, Emily Shaffer, Dong Wei, Naidoo, Nilan, Andrew Geissler

On Fri, 2018-12-07 at 10:41 +0800, Jeremy Kerr wrote:
> Hi OpenBMCers!
> 
> In an earlier thread, I promised to sketch out a design for a MCTP
> implementation in OpenBMC, and I've included it below.
Thank you.

> 
> This is roughly in the OpenBMC design document format (thanks for the
> reminder Andrew), but I've sent it to the list for initial review
> before
> proposing to gerrit - mainly because there were a lot of folks who
> expressed interest on the list. I suggest we move to gerrit once we
> get
> specific feedback coming in. Let me know if you have general comments
> whenever you like though.
Sounds good. Here are a few general comments.

1. We would need DSP0237 v1.1.0 or later (Management Component
Transport Protocol (MCTP) SMBus/I2C Transport Binding Specification) as
well.
2. In the diagram, DSP0218 should be the "PLDM for Redfish Device
Enablement" specification.
3. DSP0218 is no longer WIP. The PMCI WG has approved Draft 10, to be
published by the DMTF in the next two weeks.

Question: How should we separate this stack from the existing
IPMI-based stack so that the two don't interfere with each other?
(Probably a question for the future, once the code gets more advanced.)

> 
> In parallel, I've been developing a prototype for the MCTP library
> mentioned below, including a serial transport binding. I'll push to
> github soon and post a link, once I have it in a
> slightly-more-consumable form.
It would be nice if you could just post the header file (interface) to
gerrit so that it gets reviewed well before the actual implementation.
BTW, I need to find out how to subscribe to gerrit email notifications.

Thank you very much for posting the design diagram.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07  5:13 ` Deepak Kodihalli
  2018-12-07  7:41   ` Jeremy Kerr
@ 2018-12-07 17:09   ` Supreeth Venkatesh
  2018-12-07 18:53     ` Emily Shaffer
  2018-12-10  6:14     ` Deepak Kodihalli
  1 sibling, 2 replies; 30+ messages in thread
From: Supreeth Venkatesh @ 2018-12-07 17:09 UTC (permalink / raw)
  To: Deepak Kodihalli, Jeremy Kerr, openbmc
  Cc: Emily Shaffer, David Thompson, Dong Wei, Naidoo, Nilan

On Fri, 2018-12-07 at 10:43 +0530, Deepak Kodihalli wrote:
> On 07/12/18 8:11 AM, Jeremy Kerr wrote:
> > Hi OpenBMCers!
> > 
> > In an earlier thread, I promised to sketch out a design for a MCTP
> > implementation in OpenBMC, and I've included it below.
> 
> 
> Thanks Jeremy for sending this out. This looks good (have just one 
> comment below).
> 
> Question for everyone : do you have plans to employ PLDM over MCTP?
Yes Deepak, we do eventually.

> 
> We are interested in PLDM for various "inside the box" communications
> (at the moment for the Host <-> BMC communication). I'd like to
> propose a design for a PLDM stack on OpenBMC, and will send a design
> doc for review on the mailing list before long (I've just started
> with some initial sketches). I'd also like to know if others have
> embarked on a similar activity, so that we can collaborate early and
> avoid duplicate work.
Yes. Interested to collaborate.
Which portion of PLDM are you working on, other than base?
Platform Monitoring and Control?
Firmware Update?
BIOS Control and Configuration?
SMBIOS Transfer?
FRU Data?
Redfish Device Enablement?

We are currently interested in Platform Monitoring and Control.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07 17:09   ` Supreeth Venkatesh
@ 2018-12-07 18:53     ` Emily Shaffer
  2018-12-07 20:06       ` Supreeth Venkatesh
                         ` (2 more replies)
  2018-12-10  6:14     ` Deepak Kodihalli
  1 sibling, 3 replies; 30+ messages in thread
From: Emily Shaffer @ 2018-12-07 18:53 UTC (permalink / raw)
  To: Supreeth Venkatesh
  Cc: Deepak Kodihalli, Jeremy Kerr, openbmc, David Thompson, Dong Wei,
	Naidoo, Nilan

[-- Attachment #1: Type: text/plain, Size: 13055 bytes --]

Hi Jeremy,

I had a couple comments on the initial draft.



On Fri, Dec 7, 2018 at 9:09 AM Supreeth Venkatesh <
supreeth.venkatesh@arm.com> wrote:

> On Fri, 2018-12-07 at 10:43 +0530, Deepak Kodihalli wrote:
> > On 07/12/18 8:11 AM, Jeremy Kerr wrote:
> > > Hi OpenBMCers!
> > >
> > > In an earlier thread, I promised to sketch out a design for a MCTP
> > > implementation in OpenBMC, and I've included it below.
> >
> >
> > Thanks Jeremy for sending this out. This looks good (have just one
> > comment below).
> >
> > Question for everyone : do you have plans to employ PLDM over MCTP?
> Yes, Deepak, we do eventually.
>
> >
> > We are interested in PLDM for various "inside the box"
> > communications
> > (at the moment for the Host <-> BMC communication). I'd like to
> > propose
> > a design for a PLDM stack on OpenBMC, and would send a design
> > template
> > for review on the mailing list in some amount of time (I've just
> > started
> > with some initial sketches). I'd like to also know if others have
> > embarked on a similar activity, so that we can collaborate earlier
> > and
> > avoid duplicate work.
> Yes. Interested to collaborate.
> Which portion of PLDM are you working on, other than base?
> Platform Monitoring and Control?
> Firmware Update?
> BIOS Control and Configuration?
> SMBIOS Transfer?
> FRU Data?
> Redfish Device Enablement?
>
> We are currently interested in Platform Monitoring and Control.
>
>
> >
> > > This is roughly in the OpenBMC design document format (thanks for
> > > the
> > > reminder Andrew), but I've sent it to the list for initial review
> > > before
> > > proposing to gerrit - mainly because there were a lot of folks who
> > > expressed interest on the list. I suggest we move to gerrit once we
> > > get
> > > specific feedback coming in. Let me know if you have general
> > > comments
> > > whenever you like though.
> > >
> > > In parallel, I've been developing a prototype for the MCTP library
> > > mentioned below, including a serial transport binding. I'll push to
> > > github soon and post a link, once I have it in a
> > > slightly-more-consumable form.
> > >
> > > Cheers,
> > >
> > >
> > > Jeremy
> > >
> > > --------------------------------------------------------
> > >
> > > # Host/BMC communication channel: MCTP & PLDM
> > >
> > > Author: Jeremy Kerr <jk@ozlabs.org> <jk>
> > >
> > > ## Problem Description
> > >
> > > Currently, we have a few different methods of communication between
> > > host
> > > and BMC. This is primarily IPMI-based, but also includes a few
> > > hardware-specific side-channels, like hiomap. On OpenPOWER hardware
> > > at
> > > least, we've definitely started to hit some of the limitations of
> > > IPMI
> > > (for example, we have need for >255 sensors), as well as the
> > > hardware
> > > channels that IPMI typically uses.
> > >
> > > This design aims to use the Management Component Transport Protocol
> > > (MCTP) to provide a common transport layer over the multiple
> > > channels
> > > that OpenBMC platforms provide. Then, on top of MCTP, we have the
> > > opportunity to move to newer host/BMC messaging protocols to
> > > overcome
> > > some of the limitations we've encountered with IPMI.
> > >
> > > ## Background and References
> > >
> > > Separating the "transport" and "messaging protocol" parts of the
> > > current
> > > stack allows us to design these parts separately. Currently, IPMI
> > > defines both of these; we currently have BT and KCS (both defined
> > > as
> > > part of the IPMI 2.0 standard) as the transports, and IPMI itself
> > > as the
> > > messaging protocol.
> > >
> > > Some efforts of improving the hardware transport mechanism of IPMI
> > > have
> > > been attempted, but not in a cross-implementation manner so far.
> > > This
> > > does not address some of the limitations of the IPMI data model.
> > >
> > > MCTP defines a standard transport protocol, plus a number of
> > > separate
> > > hardware bindings for the actual transport of MCTP packets. These
> > > are
> > > defined by the DMTF's Platform Management Working group; standards
> > > are
> > > available at:
> > >
> > >    https://www.dmtf.org/standards/pmci
> > >
> > > I have included a small diagram of how these standards may fit
> > > together
> > > in an OpenBMC system. The DSP numbers there are references to DMTF
> > > standards.
> > >
> > > One of the key concepts here is that separation of transport
> > > protocol
> > > from the hardware bindings; this means that an MCTP "stack" may be
> > > using
> > > either a I2C, PCI, Serial or custom hardware channel, without the
> > > higher
> > > layers of that stack needing to be aware of the hardware
> > > implementation.
> > > These higher levels only need to be aware that they are
> > > communicating
> > > with a certain entity, defined by an Entity ID (MCTP EID).
> > >
> > > I've mainly focussed on the "transport" part of the design here.
> > > While
> > > this does enable new messaging protocols (mainly PLDM), I haven't
> > > covered that much; we will propose those details for a separate
> > > design
> > > effort.
> > >
> > > As part of the design, I have referred to MCTP "messages" and
> > > "packets";
> > > this is intentional, to match the definitions in the MCTP standard.
> > > MCTP
> > > messages are the higher-level data transferred between MCTP
> > > endpoints,
> > > which packets are typically smaller, and are what is sent over the
> > > hardware. Messages that are larger than the hardware MTU are split
> > > into
> > > individual packets by the transmit implementation, and reassembled
> > > at
> > > the receive implementation.
> > >
> > > A final important point is that this design is for the host <-->
> > > BMC
> > > channel *only*. Even if we do replace IPMI for the host interface,
> > > we
> > > will certainly need an IPMI interface available for external system
> > > management.
>

I'm not sure it's correct to demand external IPMI. Most of the OpenBMC
community (ourselves excluded :( ) is turning towards Redfish for this
role. The external-facing IPMI specification is insecure at the standard
level, so I don't think that we should be encouraging or requiring it
anywhere, even in an unrelated doc.
tl;dr I think you should be more generic here and not specify IPMI for
external mgmt.


> > >
> > > ## Requirements
> > >
> > > Any channel between host and BMC should:
> > >
> > >   - Have a simple serialisation and deserialisation format, to
> > > enable
> > >     implementations in host firmware, which have widely varying
> > > runtime
> > >     capabilities
> > >
> > >   - Allow different hardware channels, as we have a wide variety of
> > >     target platforms for OpenBMC
> > >
> > >   - Be usable over simple hardware implementations, but have a
> > > facility
> > >     for higher bandwidth messaging on platforms that require it.
> > >
> > >   - Ideally, integrate with newer messaging protocols
> > >
> > > ## Proposed Design
> > >
> > > The MCTP core specification just provides the packetisation,
> > > routing and
> > > addressing mechanisms. The actual transmit/receive of those packets
> > > is
> > > up to the hardware binding of the MCTP transport.
> > >
> > > For OpenBMC, we would introduce an MCTP daemon, which implements
> > > the
> > > transport over a configurable hardware channel (eg., Serial UART,
> > > I2C or
> > > PCI). This daemon is responsible for the packetisation and routing
> > > of
> > > MCTP messages to and from host firmware.
> > >
> > > I see two options for the "inbound" or "application" interface of
> > > the
> > > MCTP daemon:
> > >
> > >   - it could handle upper parts of the stack (eg PLDM) directly,
> > > through
> > >     in-process handlers that register for certain MCTP message
> > > types; or
> >
> > We'd like to somehow ensure (at least via documentation) that the
> > handlers don't block the MCTP daemon from processing incoming
> > traffic.
> > The handlers might anyway end up making IPC calls (via D-Bus) to
> > other
> > processes. The second approach below seems to alleviate this problem.
> >
> > >   - it could channel raw MCTP messages (reassembled from MCTP
> > > packets) to
> > >     DBUS messages (similar to the current IPMI host daemons), where
> > > the
> > >     upper layers receive and act on those DBUS events.
> > >
> > > I have a preference for the former, but I would be interested to
> > > hear
> > > from the IPMI folks about how the latter structure has worked in
> > > the
> > > past.
> > >
> > > The proposed implementation here is to produce an MCTP "library"
> > > which
> > > provides the packetisation and routing functions, between:
> > >
> > >   - an "upper" messaging transmit/receive interface, for tx/rx of a
> > > full
> > >     message to a specific endpoint
> > >
> > >   - a "lower" hardware binding for transmit/receive of individual
> > >     packets, providing a method for the core to tx/rx each packet
> > > to
> > >     hardware
> > >
> > > The lower interface would be plugged in to one of a number of
> > > hardware-specific binding implementations (most of which would be
> > > included in the library source tree, but others can be plugged-in
> > > too)
> > >
> > > The reason for a library is to allow the same MCTP implementation
> > > to be
> > > used in both OpenBMC and host firmware; the library should be
> > > bidirectional. To allow this, the library would be written in
> > > portable C
> > > (structured in a way that can be compiled as "extern C" in C++
> > > codebases), and be able to be configured to suit those runtime
> > > environments (for example, POSIX IO may not be available on all
> > > platforms; we should be able to compile the library to suit). The
> > > licence for the library should also allow this re-use; I'd suggest
> > > a
> > > dual Apache & GPL licence.
>

Love the idea of the implementation being bidirectional and able to be
dropped onto the host side as well, but I must be missing why that requires
we write it in C.  Are we targeting some platform missing a C++
cross-compiler implementation? If we're implementing something new from
scratch, I'd so much prefer to bump up the maintainability/modernity and
write it in C++ if we can.  Or could be I'm missing some key reason that it
follows we use C :)


> > >
> > > As for the hardware bindings, we would want to implement a serial
> > > transport binding first, to allow easy prototyping in simulation.
> > > For
> > > OpenPOWER, we'd want to implement a "raw LPC" binding for better
> > > performance, and later PCIe for large transfers. I imagine that
> > > there is
> > > a need for an I2C binding implementation for other hardware
> > > platforms
> > > too.
> > >
> > > Lastly, I don't want to exclude any currently-used interfaces by
> > > implementing MCTP - this should be an optional component of
> > > OpenBMC, and
> > > not require platforms to implement it.
> > >
> > > ## Alternatives Considered
> > >
> > > There have been two main alternatives to this approach:
> > >
> > > Continue using IPMI, but start making more use of OEM extensions to
> > > suit the requirements of new platforms. However, given that the
> > > IPMI
> > > standard is no longer under active development, we would likely end
> > > up
> > > with a large amount of platform-specific customisations. This also
> > > does
> > > not solve the hardware channel issues in a standard manner.
> > >
> > > Redfish between host and BMC. This would mean that host firmware
> > > needs a
> > > HTTP client, a TCP/IP stack, a JSON (de)serialiser, and support for
> > > Redfish schema. This is not feasible for all host firmware
> > > implementations; certainly not for OpenPOWER. It's possible that we
> > > could run a simplified Redfish stack - indeed, MCTP has a proposal
> > > for a
> > > Redfish-over-MCTP protocol, which uses simplified serialisation and
> > > no
> > > requirement on HTTP. However, this still introduces a large amount
> > > of
> > > complexity in host firmware.
> > >
> > > ## Impacts
> > >
> > > Development would be required to implement the MCTP transport, plus
> > > any
> > > new users of the MCTP messaging (eg, a PLDM implementation). These
> > > would
> > > somewhat duplicate the work we have in IPMI handlers.
> > >
> > > We'd want to keep IPMI running in parallel, so the "upgrade" path
> > > should
> > > be fairly straightforward.
> > >
> > > Design and development needs to involve potential host firmware
> > > implementations.
> > >
> > > ## Testing
> > >
> > > For the core MCTP library, we are able to run tests there in
> > > complete
> > > isolation (I have already been able to run a prototype MCTP stack
> > > through the afl fuzzer) to ensure that the core transport protocol
> > > works.
> > >
> > > For MCTP hardware bindings, we would develop channel-specific tests
> > > that
> > > would be run in CI on both host and BMC.
> > >
> > > For the OpenBMC MCTP daemon implementation, testing models would
> > > depend
> > > on the structure we adopt in the design section.
> > >
> >
> > Regards,
> > Deepak
> >
>
>

[-- Attachment #2: Type: text/html, Size: 16980 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07 18:53     ` Emily Shaffer
@ 2018-12-07 20:06       ` Supreeth Venkatesh
  2018-12-07 21:19       ` Jeremy Kerr
  2018-12-11  1:14       ` Stewart Smith
  2 siblings, 0 replies; 30+ messages in thread
From: Supreeth Venkatesh @ 2018-12-07 20:06 UTC (permalink / raw)
  To: Emily Shaffer
  Cc: Deepak Kodihalli, Jeremy Kerr, openbmc, David Thompson, Dong Wei,
	Naidoo, Nilan

[-- Attachment #1: Type: text/plain, Size: 16336 bytes --]

On Fri, 2018-12-07 at 10:53 -0800, Emily Shaffer wrote:
> Hi Jeremy,
> I had a couple comments on the initial draft.
> 
> 
> 
> On Fri, Dec 7, 2018 at 9:09 AM Supreeth Venkatesh <
> supreeth.venkatesh@arm.com> wrote:
> > On Fri, 2018-12-07 at 10:43 +0530, Deepak Kodihalli wrote:
> > 
> > > On 07/12/18 8:11 AM, Jeremy Kerr wrote:
> > 
> > > > A final important point is that this design is for the host <--> BMC
> > > > channel *only*. Even if we do replace IPMI for the host interface, we
> > > > will certainly need an IPMI interface available for external system
> > > > management.
> 
> I'm not sure it's correct to demand external IPMI. Most of the OpenBMC
> community (ourselves excluded :( ) is turning towards Redfish for
> this role. The external-facing IPMI specification is insecure at the
> standard level, so I don't think that we should be encouraging or
> requiring it anywhere, even in an unrelated doc.
> tl;dr I think you should be more generic here and not specify IPMI
> for external mgmt.
Agree. There should be scope for external interfaces other than just
IPMI, and Redfish seems to fit the bill.

> > > > The reason for a library is to allow the same MCTP implementation
> > > > to be used in both OpenBMC and host firmware; the library should be
> > > > bidirectional. To allow this, the library would be written in
> > > > portable C (structured in a way that can be compiled as "extern C"
> > > > in C++ codebases), and be able to be configured to suit those
> > > > runtime environments (for example, POSIX IO may not be available on
> > > > all platforms; we should be able to compile the library to suit).
> > > > The licence for the library should also allow this re-use; I'd
> > > > suggest a dual Apache & GPL licence.
> 
> Love the idea of the implementation being bidirectional and able to
> be dropped onto the host side as well, but I must be missing why that
> requires we write it in C.  Are we targeting some platform missing a
> C++ cross-compiler implementation? If we're implementing something
> new from scratch, I'd so much prefer to bump up the
> maintainability/modernity and write it in C++ if we can.  Or could be
> I'm missing some key reason that it follows we use C :)
Jeremy may have a different reason. However, in my opinion, if we do
want to implement this on the host/SoC firmware side as well (e.g. Arm
Trusted Firmware or UEFI), it would be nice to have this library
written in C, so that it would be easier to port the code into host/SoC
firmware with minimal effort. In our case, Arm server SoC firmware is
almost entirely C.
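
To illustrate what I mean (purely a sketch with made-up names, not
Jeremy's prototype): if the core only pulls in <stdint.h>/<stddef.h>
and takes its I/O and allocation hooks from the caller, the same source
should drop into an OpenBMC daemon, Arm Trusted Firmware or UEFI
unchanged:

    /* mctp_core.h - hypothetical portable core interface (sketch only) */
    #include <stdint.h>
    #include <stddef.h>

    /*
     * The environment supplies these hooks; the core itself never calls
     * POSIX, malloc or stdio, so it does not care which firmware or OS
     * it is compiled into.
     */
    struct mctp_env_ops {
        /* transmit one framed packet via the hardware binding */
        int (*pkt_tx)(void *env_ctx, const uint8_t *buf, size_t len);
        /* allocator hooks; firmware may back these with a static pool */
        void *(*alloc)(void *env_ctx, size_t len);
        void (*free)(void *env_ctx, void *ptr);
    };

    struct mctp_core;   /* opaque to callers */

    /* Bind the core to an environment; returns NULL on failure. */
    struct mctp_core *mctp_core_init(const struct mctp_env_ops *ops,
                                     void *env_ctx);

    /* Feed a received packet in from the binding; the core reassembles
     * packets into full messages internally. */
    int mctp_core_rx_pkt(struct mctp_core *core,
                         const uint8_t *buf, size_t len);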

[-- Attachment #2: Type: text/html, Size: 18210 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07 18:53     ` Emily Shaffer
  2018-12-07 20:06       ` Supreeth Venkatesh
@ 2018-12-07 21:19       ` Jeremy Kerr
  2018-12-11  1:14       ` Stewart Smith
  2 siblings, 0 replies; 30+ messages in thread
From: Jeremy Kerr @ 2018-12-07 21:19 UTC (permalink / raw)
  To: Emily Shaffer, Supreeth Venkatesh
  Cc: Deepak Kodihalli, openbmc, David Thompson, Dong Wei, Naidoo, Nilan

Hi Emily,

> I had a couple comments on the initial draft.

Super, thanks for taking a look.

>      > > A final important point is that this design is for the host <-->
>      > > BMC
>      > > channel *only*. Even if we do replace IPMI for the host interface,
>      > > we
>      > > will certainly need an IPMI interface available for external system
>      > > management.
> 
> 
> I'm not sure it's correct to demand external IPMI. Most of the OpenBMC 
> community (ourselves excluded :( ) is turning towards Redfish for this 
> role. The external-facing IPMI specification is insecure at the 
> standard level, so I don't think that we should be encouraging or 
> requiring it anywhere, even in an unrelated doc.
> tl;dr I think you should be more generic here and not specify IPMI for 
> external mgmt.

OK. I didn't intend to introduce any requirement on IPMI there, even 
implied, so I'll reword.

However, I think there is still a lot of external dependence on an IPMI 
interface - it'll be a while before everyone rewrites all of those 
custom IPMI-based in-house provisioning systems that are out there.

> Love the idea of the implementation being bidirectional and able to be 
> dropped onto the host side as well, but I must be missing why that 
> requires we write it in C.  Are we targeting some platform missing a C++ 
> cross-compiler implementation? If we're implementing something new from 
> scratch, I'd so much prefer to bump up the maintainability/modernity and 
> write it in C++ if we can.  Or could be I'm missing some key reason that 
> it follows we use C :)

We (and others) have host firmware written in C.

We can always link C to C++, but not the other way around, and it'd be 
fairly straightforward to implement a C++ wrapper for this, which could 
even live in-tree.
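
To illustrate (names here are placeholders, not necessarily what the
prototype will end up using): the public header would carry the usual
guards, so C++ code can include it directly, and an in-tree RAII
wrapper only needs to pair up the create/destroy calls:

    /* libmctp.h - sketch of a public C interface (illustrative only) */
    #ifndef LIBMCTP_H
    #define LIBMCTP_H

    #include <stdint.h>
    #include <stddef.h>

    #ifdef __cplusplus
    extern "C" {
    #endif

    struct mctp;                /* opaque core context */
    typedef uint8_t mctp_eid_t; /* MCTP endpoint ID */

    struct mctp *mctp_init(void);
    void mctp_destroy(struct mctp *mctp);

    /* transmit one full (reassembly-level) message to an endpoint */
    int mctp_message_tx(struct mctp *mctp, mctp_eid_t dest,
                        const void *msg, size_t len);

    #ifdef __cplusplus
    }
    #endif

    #endif /* LIBMCTP_H */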

Cheers,


Jeremy

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07 17:09   ` Supreeth Venkatesh
  2018-12-07 18:53     ` Emily Shaffer
@ 2018-12-10  6:14     ` Deepak Kodihalli
  2018-12-10 17:40       ` Supreeth Venkatesh
  1 sibling, 1 reply; 30+ messages in thread
From: Deepak Kodihalli @ 2018-12-10  6:14 UTC (permalink / raw)
  To: Supreeth Venkatesh, Jeremy Kerr, openbmc
  Cc: Emily Shaffer, David Thompson, Dong Wei, Naidoo, Nilan

On 07/12/18 10:39 PM, Supreeth Venkatesh wrote:
> On Fri, 2018-12-07 at 10:43 +0530, Deepak Kodihalli wrote:
>> On 07/12/18 8:11 AM, Jeremy Kerr wrote:
>>> Hi OpenBMCers!
>>>
>>> In an earlier thread, I promised to sketch out a design for a MCTP
>>> implementation in OpenBMC, and I've included it below.
>>
>>
>> Thanks Jeremy for sending this out. This looks good (have just one
>> comment below).
>>
>> Question for everyone : do you have plans to employ PLDM over MCTP?
> Yes, Deepak, we do eventually.


Thanks for letting me know Supreeth!

>>
>> We are interested in PLDM for various "inside the box"
>> communications
>> (at the moment for the Host <-> BMC communication). I'd like to
>> propose
>> a design for a PLDM stack on OpenBMC, and would send a design
>> template
>> for review on the mailing list in some amount of time (I've just
>> started
>> with some initial sketches). I'd like to also know if others have
>> embarked on a similar activity, so that we can collaborate earlier
>> and
>> avoid duplicate work.
> Yes. Interested to collaborate.
> Which portion of PLDM are you working on, other than base?
> Platform Monitoring and Control?
> Firmware Update?
> BIOS Control and Configuration?
> SMBIOS Transfer?
> FRU Data?
> Redfish Device Enablement?
> 
> We are currently interested in Platform Monitoring and Control.


We're interested in each of these profiles for BMC <-> host
communications. Are you interested in Platform Monitoring and Control
for communications involving the BMC and the host firmware, or the BMC
and other devices?

Also, I have been thinking about the usefulness/feasibility of a common 
PLDM library (just the protocol piece - encoding and decoding PLDM 
messages), so as to be able to share code between BMC and host firmware. 
This of course sets expectations on the library based on OpenBMC and 
various host firmware stacks. Do you have an opinion on this?
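
To make that concrete, here is the sort of thing I have in mind (a
rough sketch; the field packing follows my reading of the common PLDM
header in DSP0240, and the names are placeholders rather than a
proposed API):

    /* pldm_msg.h - sketch of a shared PLDM encode helper (illustrative) */
    #include <stdint.h>
    #include <stddef.h>

    #define PLDM_RESPONSE 0
    #define PLDM_REQUEST  1

    struct pldm_header {
        uint8_t instance_id;  /* 5-bit instance ID */
        uint8_t type;         /* 6-bit PLDM type */
        uint8_t command;      /* command code */
        uint8_t request;      /* PLDM_REQUEST or PLDM_RESPONSE */
    };

    /* Pack the 3-byte common header into buf; returns bytes written,
     * or -1 if buf is too small. */
    static inline int pldm_pack_header(const struct pldm_header *hdr,
                                       uint8_t *buf, size_t len)
    {
        if (len < 3)
            return -1;
        buf[0] = (uint8_t)(((hdr->request & 0x1) << 7) |
                           (hdr->instance_id & 0x1f));
        buf[1] = hdr->type & 0x3f;  /* header version 0 in the top bits */
        buf[2] = hdr->command;
        return 3;
    }

The matching unpack, plus per-command payload codecs, would be the bulk
of such a library; none of it would need to know which transport
carried the bytes, which is what should make it shareable between the
BMC and host firmware.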

>>
>>> This is roughly in the OpenBMC design document format (thanks for
>>> the
>>> reminder Andrew), but I've sent it to the list for initial review
>>> before
>>> proposing to gerrit - mainly because there were a lot of folks who
>>> expressed interest on the list. I suggest we move to gerrit once we
>>> get
>>> specific feedback coming in. Let me know if you have general
>>> comments
>>> whenever you like though.
>>>
>>> In parallel, I've been developing a prototype for the MCTP library
>>> mentioned below, including a serial transport binding. I'll push to
>>> github soon and post a link, once I have it in a
>>> slightly-more-consumable form.
>>>
>>> Cheers,
>>>
>>>
>>> Jeremy
>>>
>>> --------------------------------------------------------
>>>
>>> # Host/BMC communication channel: MCTP & PLDM
>>>
>>> Author: Jeremy Kerr <jk@ozlabs.org> <jk>
>>>
>>> ## Problem Description
>>>
>>> Currently, we have a few different methods of communication between
>>> host
>>> and BMC. This is primarily IPMI-based, but also includes a few
>>> hardware-specific side-channels, like hiomap. On OpenPOWER hardware
>>> at
>>> least, we've definitely started to hit some of the limitations of
>>> IPMI
>>> (for example, we have need for >255 sensors), as well as the
>>> hardware
>>> channels that IPMI typically uses.
>>>
>>> This design aims to use the Management Component Transport Protocol
>>> (MCTP) to provide a common transport layer over the multiple
>>> channels
>>> that OpenBMC platforms provide. Then, on top of MCTP, we have the
>>> opportunity to move to newer host/BMC messaging protocols to
>>> overcome
>>> some of the limitations we've encountered with IPMI.
>>>
>>> ## Background and References
>>>
>>> Separating the "transport" and "messaging protocol" parts of the
>>> current
>>> stack allows us to design these parts separately. Currently, IPMI
>>> defines both of these; we currently have BT and KCS (both defined
>>> as
>>> part of the IPMI 2.0 standard) as the transports, and IPMI itself
>>> as the
>>> messaging protocol.
>>>
>>> Some efforts of improving the hardware transport mechanism of IPMI
>>> have
>>> been attempted, but not in a cross-implementation manner so far.
>>> This
>>> does not address some of the limitations of the IPMI data model.
>>>
>>> MCTP defines a standard transport protocol, plus a number of
>>> separate
>>> hardware bindings for the actual transport of MCTP packets. These
>>> are
>>> defined by the DMTF's Platform Management Working group; standards
>>> are
>>> available at:
>>>
>>>     https://www.dmtf.org/standards/pmci
>>>
>>> I have included a small diagram of how these standards may fit
>>> together
>>> in an OpenBMC system. The DSP numbers there are references to DMTF
>>> standards.
>>>
>>> One of the key concepts here is that separation of transport
>>> protocol
>>> from the hardware bindings; this means that an MCTP "stack" may be
>>> using
>>> either a I2C, PCI, Serial or custom hardware channel, without the
>>> higher
>>> layers of that stack needing to be aware of the hardware
>>> implementation.
>>> These higher levels only need to be aware that they are
>>> communicating
>>> with a certain entity, defined by an Entity ID (MCTP EID).
>>>
>>> I've mainly focussed on the "transport" part of the design here.
>>> While
>>> this does enable new messaging protocols (mainly PLDM), I haven't
>>> covered that much; we will propose those details for a separate
>>> design
>>> effort.
>>>
>>> As part of the design, I have referred to MCTP "messages" and
>>> "packets";
>>> this is intentional, to match the definitions in the MCTP standard.
>>> MCTP
>>> messages are the higher-level data transferred between MCTP
>>> endpoints,
>>> which packets are typically smaller, and are what is sent over the
>>> hardware. Messages that are larger than the hardware MTU are split
>>> into
>>> individual packets by the transmit implementation, and reassembled
>>> at
>>> the receive implementation.
>>>
>>> A final important point is that this design is for the host <-->
>>> BMC
>>> channel *only*. Even if we do replace IPMI for the host interface,
>>> we
>>> will certainly need an IPMI interface available for external system
>>> management.
>>>
>>> ## Requirements
>>>
>>> Any channel between host and BMC should:
>>>
>>>    - Have a simple serialisation and deserialisation format, to
>>> enable
>>>      implementations in host firmware, which have widely varying
>>> runtime
>>>      capabilities
>>>
>>>    - Allow different hardware channels, as we have a wide variety of
>>>      target platforms for OpenBMC
>>>
>>>    - Be usable over simple hardware implementations, but have a
>>> facility
>>>      for higher bandwidth messaging on platforms that require it.
>>>
>>>    - Ideally, integrate with newer messaging protocols
>>>
>>> ## Proposed Design
>>>
>>> The MCTP core specification just provides the packetisation,
>>> routing and
>>> addressing mechanisms. The actual transmit/receive of those packets
>>> is
>>> up to the hardware binding of the MCTP transport.
>>>
>>> For OpenBMC, we would introduce an MCTP daemon, which implements
>>> the
>>> transport over a configurable hardware channel (eg., Serial UART,
>>> I2C or
>>> PCI). This daemon is responsible for the packetisation and routing
>>> of
>>> MCTP messages to and from host firmware.
>>>
>>> I see two options for the "inbound" or "application" interface of
>>> the
>>> MCTP daemon:
>>>
>>>    - it could handle upper parts of the stack (eg PLDM) directly,
>>> through
>>>      in-process handlers that register for certain MCTP message
>>> types; or
>>
>> We'd like to somehow ensure (at least via documentation) that the
>> handlers don't block the MCTP daemon from processing incoming
>> traffic.
>> The handlers might anyway end up making IPC calls (via D-Bus) to
>> other
>> processes. The second approach below seems to alleviate this problem.
>>
>>>    - it could channel raw MCTP messages (reassembled from MCTP
>>> packets) to
>>>      DBUS messages (similar to the current IPMI host daemons), where
>>> the
>>>      upper layers receive and act on those DBUS events.
>>>
>>> I have a preference for the former, but I would be interested to
>>> hear
>>> from the IPMI folks about how the latter structure has worked in
>>> the
>>> past.
>>>
>>> The proposed implementation here is to produce an MCTP "library"
>>> which
>>> provides the packetisation and routing functions, between:
>>>
>>>    - an "upper" messaging transmit/receive interface, for tx/rx of a
>>> full
>>>      message to a specific endpoint
>>>
>>>    - a "lower" hardware binding for transmit/receive of individual
>>>      packets, providing a method for the core to tx/rx each packet
>>> to
>>>      hardware
>>>
>>> The lower interface would be plugged in to one of a number of
>>> hardware-specific binding implementations (most of which would be
>>> included in the library source tree, but others can be plugged-in
>>> too)
>>>
>>> The reason for a library is to allow the same MCTP implementation
>>> to be
>>> used in both OpenBMC and host firmware; the library should be
>>> bidirectional. To allow this, the library would be written in
>>> portable C
>>> (structured in a way that can be compiled as "extern C" in C++
>>> codebases), and be able to be configured to suit those runtime
>>> environments (for example, POSIX IO may not be available on all
>>> platforms; we should be able to compile the library to suit). The
>>> licence for the library should also allow this re-use; I'd suggest
>>> a
>>> dual Apache & GPL licence.
>>>
>>> As for the hardware bindings, we would want to implement a serial
>>> transport binding first, to allow easy prototyping in simulation.
>>> For
>>> OpenPOWER, we'd want to implement a "raw LPC" binding for better
>>> performance, and later PCIe for large transfers. I imagine that
>>> there is
>>> a need for an I2C binding implementation for other hardware
>>> platforms
>>> too.
>>>
>>> Lastly, I don't want to exclude any currently-used interfaces by
>>> implementing MCTP - this should be an optional component of
>>> OpenBMC, and
>>> not require platforms to implement it.
>>>
>>> ## Alternatives Considered
>>>
>>> There have been two main alternatives to this approach:
>>>
>>> Continue using IPMI, but start making more use of OEM extensions to
>>> suit the requirements of new platforms. However, given that the
>>> IPMI
>>> standard is no longer under active development, we would likely end
>>> up
>>> with a large amount of platform-specific customisations. This also
>>> does
>>> not solve the hardware channel issues in a standard manner.
>>>
>>> Redfish between host and BMC. This would mean that host firmware
>>> needs a
>>> HTTP client, a TCP/IP stack, a JSON (de)serialiser, and support for
>>> Redfish schema. This is not feasible for all host firmware
>>> implementations; certainly not for OpenPOWER. It's possible that we
>>> could run a simplified Redfish stack - indeed, MCTP has a proposal
>>> for a
>>> Redfish-over-MCTP protocol, which uses simplified serialisation and
>>> no
>>> requirement on HTTP. However, this still introduces a large amount
>>> of
>>> complexity in host firmware.
>>>
>>> ## Impacts
>>>
>>> Development would be required to implement the MCTP transport, plus
>>> any
>>> new users of the MCTP messaging (eg, a PLDM implementation). These
>>> would
>>> somewhat duplicate the work we have in IPMI handlers.
>>>
>>> We'd want to keep IPMI running in parallel, so the "upgrade" path
>>> should
>>> be fairly straightforward.
>>>
>>> Design and development needs to involve potential host firmware
>>> implementations.
>>>
>>> ## Testing
>>>
>>> For the core MCTP library, we are able to run tests there in
>>> complete
>>> isolation (I have already been able to run a prototype MCTP stack
>>> through the afl fuzzer) to ensure that the core transport protocol
>>> works.
>>>
>>> For MCTP hardware bindings, we would develop channel-specific tests
>>> that
>>> would be run in CI on both host and BMC.
>>>
>>> For the OpenBMC MCTP daemon implementation, testing models would
>>> depend
>>> on the structure we adopt in the design section.
>>>
>>
>> Regards,
>> Deepak
>>
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-10  6:14     ` Deepak Kodihalli
@ 2018-12-10 17:40       ` Supreeth Venkatesh
  2018-12-11  7:38         ` Deepak Kodihalli
  0 siblings, 1 reply; 30+ messages in thread
From: Supreeth Venkatesh @ 2018-12-10 17:40 UTC (permalink / raw)
  To: Deepak Kodihalli, Jeremy Kerr, openbmc
  Cc: Emily Shaffer, David Thompson, Dong Wei, Naidoo, Nilan

On Mon, 2018-12-10 at 11:44 +0530, Deepak Kodihalli wrote:
> On 07/12/18 10:39 PM, Supreeth Venkatesh wrote:
> > On Fri, 2018-12-07 at 10:43 +0530, Deepak Kodihalli wrote:
> > > On 07/12/18 8:11 AM, Jeremy Kerr wrote:
> > > > Hi OpenBMCers!
> > > > 
> > > > In an earlier thread, I promised to sketch out a design for a
> > > > MCTP
> > > > implementation in OpenBMC, and I've included it below.
> > > 
> > > 
> > > Thanks Jeremy for sending this out. This looks good (have just
> > > one
> > > comment below).
> > > 
> > > Question for everyone : do you have plans to employ PLDM over
> > > MCTP?
> > 
> > Yes, Deepak, we do eventually.
> 
> 
> Thanks for letting me know Supreeth!
My pleasure.

> 
> > > 
> > > We are interested in PLDM for various "inside the box"
> > > communications
> > > (at the moment for the Host <-> BMC communication). I'd like to
> > > propose
> > > a design for a PLDM stack on OpenBMC, and would send a design
> > > template
> > > for review on the mailing list in some amount of time (I've just
> > > started
> > > with some initial sketches). I'd like to also know if others have
> > > embarked on a similar activity, so that we can collaborate
> > > earlier
> > > and
> > > avoid duplicate work.
> > 
> > Yes. Interested to collaborate.
> > Which portion of PLDM are you working on, other than base?
> > Platform Monitoring and Control?
> > Firmware Update?
> > BIOS Control and Configuration?
> > SMBIOS Transfer?
> > FRU Data?
> > Redfish Device Enablement?
> > 
> > We are currently interested in Platform Monitoring and Control.
> 
> 
> We're interested in each of these profiles for BMC <-> host
> communications. Are you interested in Platform Monitoring and Control
> for communications involving the BMC and the host firmware, or the
> BMC and other devices?
BMC and the host firmware initially.

> 
> Also, I have been thinking about the usefulness/feasibility of a
> common 
> PLDM library (just the protocol piece - encoding and decoding PLDM 
> messages), so as to be able to share code between BMC and host
> firmware. 
> This of course sets expectations on the library based on OpenBMC and 
> various host firmware stacks. Do you have an opinion on this?
Glad that we are on the same page.
My thinking at this point is to come up with a generic, standalone C
library which processes PLDM messages without regard to whether a
message carries a payload for sensors, firmware update, etc., so that it
can be ported to host firmware if needed.
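
Roughly what I am picturing (again just a sketch; names and numbers are
illustrative): the library validates the common header and hands the
untouched payload to whichever handler registered for that PLDM type,
whether that is monitoring and control, firmware update or anything
else:

    /* Hypothetical payload-agnostic PLDM demux (sketch only) */
    #include <stdint.h>
    #include <stddef.h>

    /* A handler sees only (command, payload); it does not know or care
     * which transport carried the message. */
    typedef int (*pldm_type_handler)(uint8_t command,
                                     const uint8_t *payload, size_t len);

    #define PLDM_MAX_TYPES 64  /* the PLDM type field is 6 bits wide */

    static pldm_type_handler handlers[PLDM_MAX_TYPES];

    int pldm_register_type(uint8_t type, pldm_type_handler fn)
    {
        if (type >= PLDM_MAX_TYPES)
            return -1;
        handlers[type] = fn;
        return 0;
    }

    /* Called with one full, reassembled PLDM message (header + payload). */
    int pldm_demux(const uint8_t *msg, size_t len)
    {
        if (len < 3)
            return -1;                /* shorter than the common header */
        uint8_t type = msg[1] & 0x3f; /* PLDM type field */
        uint8_t command = msg[2];
        if (!handlers[type])
            return -1;                /* nobody registered for this type */
        return handlers[type](command, msg + 3, len - 3);
    }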


> 
> > > 
> > > > This is roughly in the OpenBMC design document format (thanks
> > > > for
> > > > the
> > > > reminder Andrew), but I've sent it to the list for initial
> > > > review
> > > > before
> > > > proposing to gerrit - mainly because there were a lot of folks
> > > > who
> > > > expressed interest on the list. I suggest we move to gerrit
> > > > once we
> > > > get
> > > > specific feedback coming in. Let me know if you have general
> > > > comments
> > > > whenever you like though.
> > > > 
> > > > In parallel, I've been developing a prototype for the MCTP
> > > > library
> > > > mentioned below, including a serial transport binding. I'll
> > > > push to
> > > > github soon and post a link, once I have it in a
> > > > slightly-more-consumable form.
> > > > 
> > > > Cheers,
> > > > 
> > > > 
> > > > Jeremy
> > > > 
> > > > --------------------------------------------------------
> > > > 
> > > > # Host/BMC communication channel: MCTP & PLDM
> > > > 
> > > > Author: Jeremy Kerr <jk@ozlabs.org> <jk>
> > > > 
> > > > ## Problem Description
> > > > 
> > > > Currently, we have a few different methods of communication
> > > > between
> > > > host
> > > > and BMC. This is primarily IPMI-based, but also includes a few
> > > > hardware-specific side-channels, like hiomap. On OpenPOWER
> > > > hardware
> > > > at
> > > > least, we've definitely started to hit some of the limitations
> > > > of
> > > > IPMI
> > > > (for example, we have need for >255 sensors), as well as the
> > > > hardware
> > > > channels that IPMI typically uses.
> > > > 
> > > > This design aims to use the Management Component Transport
> > > > Protocol
> > > > (MCTP) to provide a common transport layer over the multiple
> > > > channels
> > > > that OpenBMC platforms provide. Then, on top of MCTP, we have
> > > > the
> > > > opportunity to move to newer host/BMC messaging protocols to
> > > > overcome
> > > > some of the limitations we've encountered with IPMI.
> > > > 
> > > > ## Background and References
> > > > 
> > > > Separating the "transport" and "messaging protocol" parts of
> > > > the
> > > > current
> > > > stack allows us to design these parts separately. Currently,
> > > > IPMI
> > > > defines both of these; we currently have BT and KCS (both
> > > > defined
> > > > as
> > > > part of the IPMI 2.0 standard) as the transports, and IPMI
> > > > itself
> > > > as the
> > > > messaging protocol.
> > > > 
> > > > Some efforts of improving the hardware transport mechanism of
> > > > IPMI
> > > > have
> > > > been attempted, but not in a cross-implementation manner so
> > > > far.
> > > > This
> > > > does not address some of the limitations of the IPMI data
> > > > model.
> > > > 
> > > > MCTP defines a standard transport protocol, plus a number of
> > > > separate
> > > > hardware bindings for the actual transport of MCTP packets.
> > > > These
> > > > are
> > > > defined by the DMTF's Platform Management Working group;
> > > > standards
> > > > are
> > > > available at:
> > > > 
> > > >     https://www.dmtf.org/standards/pmci
> > > > 
> > > > I have included a small diagram of how these standards may fit
> > > > together
> > > > in an OpenBMC system. The DSP numbers there are references to
> > > > DMTF
> > > > standards.
> > > > 
> > > > One of the key concepts here is that separation of transport
> > > > protocol
> > > > from the hardware bindings; this means that an MCTP "stack" may
> > > > be
> > > > using
> > > > either a I2C, PCI, Serial or custom hardware channel, without
> > > > the
> > > > higher
> > > > layers of that stack needing to be aware of the hardware
> > > > implementation.
> > > > These higher levels only need to be aware that they are
> > > > communicating
> > > > with a certain entity, defined by an Entity ID (MCTP EID).
> > > > 
> > > > I've mainly focussed on the "transport" part of the design
> > > > here.
> > > > While
> > > > this does enable new messaging protocols (mainly PLDM), I
> > > > haven't
> > > > covered that much; we will propose those details for a separate
> > > > design
> > > > effort.
> > > > 
> > > > As part of the design, I have referred to MCTP "messages" and
> > > > "packets";
> > > > this is intentional, to match the definitions in the MCTP
> > > > standard.
> > > > MCTP
> > > > messages are the higher-level data transferred between MCTP
> > > > endpoints,
> > > > which packets are typically smaller, and are what is sent over
> > > > the
> > > > hardware. Messages that are larger than the hardware MTU are
> > > > split
> > > > into
> > > > individual packets by the transmit implementation, and
> > > > reassembled
> > > > at
> > > > the receive implementation.
> > > > 
> > > > A final important point is that this design is for the host <
> > > > -->
> > > > BMC
> > > > channel *only*. Even if we do replace IPMI for the host
> > > > interface,
> > > > we
> > > > will certainly need an IPMI interface available for external
> > > > system
> > > > management.
> > > > 
> > > > ## Requirements
> > > > 
> > > > Any channel between host and BMC should:
> > > > 
> > > >    - Have a simple serialisation and deserialisation format, to
> > > > enable
> > > >      implementations in host firmware, which have widely
> > > > varying
> > > > runtime
> > > >      capabilities
> > > > 
> > > >    - Allow different hardware channels, as we have a wide
> > > > variety of
> > > >      target platforms for OpenBMC
> > > > 
> > > >    - Be usable over simple hardware implementations, but have a
> > > > facility
> > > >      for higher bandwidth messaging on platforms that require
> > > > it.
> > > > 
> > > >    - Ideally, integrate with newer messaging protocols
> > > > 
> > > > ## Proposed Design
> > > > 
> > > > The MCTP core specification just provides the packetisation,
> > > > routing and
> > > > addressing mechanisms. The actual transmit/receive of those
> > > > packets
> > > > is
> > > > up to the hardware binding of the MCTP transport.
> > > > 
> > > > For OpenBMC, we would introduce an MCTP daemon, which
> > > > implements
> > > > the
> > > > transport over a configurable hardware channel (eg., Serial
> > > > UART,
> > > > I2C or
> > > > PCI). This daemon is responsible for the packetisation and
> > > > routing
> > > > of
> > > > MCTP messages to and from host firmware.
> > > > 
> > > > I see two options for the "inbound" or "application" interface
> > > > of
> > > > the
> > > > MCTP daemon:
> > > > 
> > > >    - it could handle upper parts of the stack (eg PLDM)
> > > > directly,
> > > > through
> > > >      in-process handlers that register for certain MCTP message
> > > > types; or
> > > 
> > > We'd like to somehow ensure (at least via documentation) that the
> > > handlers don't block the MCTP daemon from processing incoming
> > > traffic.
> > > The handlers might anyway end up making IPC calls (via D-Bus) to
> > > other
> > > processes. The second approach below seems to alleviate this
> > > problem.
> > > 
> > > >   - it could channel raw MCTP messages (reassembled from MCTP
> > > >     packets) to DBUS messages (similar to the current IPMI host
> > > >     daemons), where the upper layers receive and act on those
> > > >     DBUS events.
> > > >
> > > > I have a preference for the former, but I would be interested
> > > > to hear from the IPMI folks about how the latter structure has
> > > > worked in the past.
> > > >
> > > > The proposed implementation here is to produce an MCTP
> > > > "library" which provides the packetisation and routing
> > > > functions, between:
> > > >
> > > >   - an "upper" messaging transmit/receive interface, for tx/rx
> > > >     of a full message to a specific endpoint
> > > >
> > > >   - a "lower" hardware binding for transmit/receive of
> > > >     individual packets, providing a method for the core to
> > > >     tx/rx each packet to hardware
> > > >
> > > > The lower interface would be plugged in to one of a number of
> > > > hardware-specific binding implementations (most of which would
> > > > be included in the library source tree, but others can be
> > > > plugged in too).
> > > >
> > > > The reason for a library is to allow the same MCTP
> > > > implementation to be used in both OpenBMC and host firmware;
> > > > the library should be bidirectional. To allow this, the library
> > > > would be written in portable C (structured in a way that can be
> > > > compiled as extern "C" in C++ codebases), and be able to be
> > > > configured to suit those runtime environments (for example,
> > > > POSIX IO may not be available on all platforms; we should be
> > > > able to compile the library to suit). The licence for the
> > > > library should also allow this re-use; I'd suggest a dual
> > > > Apache & GPL licence.
> > > > 
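The "compiled as extern "C" in C++ codebases" point usually comes down
to a small header convention; a generic sketch follows (the names are
invented, not the proposal's actual header):

  /* mctp_core.h - hypothetical public header for the portable C core */
  #ifndef MCTP_CORE_H
  #define MCTP_CORE_H

  #include <stddef.h>
  #include <stdint.h>

  #ifdef __cplusplus
  extern "C" {
  #endif

  typedef uint8_t mctp_eid_t;

  /* Upper interface: send a full message to an endpoint. */
  int mctp_message_tx(mctp_eid_t dest, const void *msg, size_t len);

  #ifdef __cplusplus
  }
  #endif

  #endif /* MCTP_CORE_H */
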
> > > > As for the hardware bindings, we would want to implement a
> > > > serial transport binding first, to allow easy prototyping in
> > > > simulation. For OpenPOWER, we'd want to implement a "raw LPC"
> > > > binding for better performance, and later PCIe for large
> > > > transfers. I imagine that there is a need for an I2C binding
> > > > implementation for other hardware platforms too.
> > > >
> > > > Lastly, I don't want to exclude any currently-used interfaces
> > > > by implementing MCTP - this should be an optional component of
> > > > OpenBMC, and not require platforms to implement it.
> > > >
> > > > ## Alternatives Considered
> > > >
> > > > There have been two main alternatives to this approach:
> > > >
> > > > Continue using IPMI, but start making more use of OEM
> > > > extensions to suit the requirements of new platforms. However,
> > > > given that the IPMI standard is no longer under active
> > > > development, we would likely end up with a large amount of
> > > > platform-specific customisations. This also does not solve the
> > > > hardware channel issues in a standard manner.
> > > >
> > > > Redfish between host and BMC. This would mean that host
> > > > firmware needs an HTTP client, a TCP/IP stack, a JSON
> > > > (de)serialiser, and support for Redfish schema. This is not
> > > > feasible for all host firmware implementations; certainly not
> > > > for OpenPOWER. It's possible that we could run a simplified
> > > > Redfish stack - indeed, MCTP has a proposal for a
> > > > Redfish-over-MCTP protocol, which uses simplified serialisation
> > > > and no requirement on HTTP. However, this still introduces a
> > > > large amount of complexity in host firmware.
> > > >
> > > > ## Impacts
> > > >
> > > > Development would be required to implement the MCTP transport,
> > > > plus any new users of the MCTP messaging (e.g., a PLDM
> > > > implementation). These would somewhat duplicate the work we
> > > > have in IPMI handlers.
> > > >
> > > > We'd want to keep IPMI running in parallel, so the "upgrade"
> > > > path should be fairly straightforward.
> > > >
> > > > Design and development needs to involve potential host firmware
> > > > implementations.
> > > >
> > > > ## Testing
> > > >
> > > > For the core MCTP library, we are able to run tests there in
> > > > complete isolation (I have already been able to run a prototype
> > > > MCTP stack through the afl fuzzer) to ensure that the core
> > > > transport protocol works.
> > > >
> > > > For MCTP hardware bindings, we would develop channel-specific
> > > > tests that would be run in CI on both host and BMC.
> > > >
> > > > For the OpenBMC MCTP daemon implementation, testing models
> > > > would depend on the structure we adopt in the design section.
> > > > 
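For context on the afl mention above: fuzzing the core in isolation
usually just means feeding each generated input into the packet receive
path. A generic harness sketch, assuming a hypothetical packet_rx()
entry point rather than the prototype's real API:

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical entry point into the MCTP core's packet receive
   * path; a real harness would call the library's rx function. */
  extern void packet_rx(const uint8_t *buf, size_t len);

  int main(void)
  {
      /* afl runs the program once per generated input, delivered on
       * stdin; any crash or hang is recorded as a finding. */
      uint8_t buf[4096];
      size_t len = fread(buf, 1, sizeof(buf), stdin);

      packet_rx(buf, len);
      return 0;
  }
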
> > > 
> > > Regards,
> > > Deepak
> > > 
> 
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07 18:53     ` Emily Shaffer
  2018-12-07 20:06       ` Supreeth Venkatesh
  2018-12-07 21:19       ` Jeremy Kerr
@ 2018-12-11  1:14       ` Stewart Smith
  2018-12-11 18:26         ` Tanous, Ed
  2 siblings, 1 reply; 30+ messages in thread
From: Stewart Smith @ 2018-12-11  1:14 UTC (permalink / raw)
  To: Emily Shaffer, Supreeth Venkatesh
  Cc: David Thompson, Dong Wei, Naidoo, Nilan, openbmc

Emily Shaffer <emilyshaffer@google.com> writes:
>> > > The reason for a library is to allow the same MCTP implementation
>> > > to be
>> > > used in both OpenBMC and host firmware; the library should be
>> > > bidirectional. To allow this, the library would be written in
>> > > portable C
>> > > (structured in a way that can be compiled as "extern C" in C++
>> > > codebases), and be able to be configured to suit those runtime
>> > > environments (for example, POSIX IO may not be available on all
>> > > platforms; we should be able to compile the library to suit). The
>> > > licence for the library should also allow this re-use; I'd suggest
>> > > a
>> > > dual Apache & GPL licence.
>>
>
> Love the idea of the implementation being bidirectional and able to be
> dropped onto the host side as well, but I must be missing why that requires
> we write it in C.  Are we targeting some platform missing a C++
> cross-compiler implementation? If we're implementing something new from
> scratch, I'd so much prefer to bump up the maintainability/modernity and
> write it in C++ if we can.  Or could be I'm missing some key reason that it
> follows we use C :)

We're certainly missing C++ runtime libraries and support, so that would
be an interesting set of limitations. Even as it is, our C libraries are
pretty limited.

If we were going to bring a not-C language into firmware, I'd prefer we
look at something modern like Rust, that's designed to not have
programmers marching around with guns pointed at their feet.
(I have C++ opinions, most of which can be summarised by NAK :)

-- 
Stewart Smith
OPAL Architect, IBM.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-10 17:40       ` Supreeth Venkatesh
@ 2018-12-11  7:38         ` Deepak Kodihalli
  2018-12-12 22:50           ` Supreeth Venkatesh
  0 siblings, 1 reply; 30+ messages in thread
From: Deepak Kodihalli @ 2018-12-11  7:38 UTC (permalink / raw)
  To: Supreeth Venkatesh, Jeremy Kerr, openbmc
  Cc: Emily Shaffer, David Thompson, Dong Wei, Naidoo, Nilan

On 10/12/18 11:10 PM, Supreeth Venkatesh wrote:
> On Mon, 2018-12-10 at 11:44 +0530, Deepak Kodihalli wrote:
>> On 07/12/18 10:39 PM, Supreeth Venkatesh wrote:
>>> On Fri, 2018-12-07 at 10:43 +0530, Deepak Kodihalli wrote:
>>>> On 07/12/18 8:11 AM, Jeremy Kerr wrote:
>>>>> Hi OpenBMCers!
>>>>>
>>>>> In an earlier thread, I promised to sketch out a design for a
>>>>> MCTP
>>>>> implementation in OpenBMC, and I've included it below.
>>>>
>>>>
>>>> Thanks Jeremy for sending this out. This looks good (have just
>>>> one
>>>> comment below).
>>>>
>>>> Question for everyone : do you have plans to employ PLDM over
>>>> MCTP?
>>>
>>> Yes Deepak. we do eventually.
>>
>>
>> Thanks for letting me know Supreeth!
> My pleasure.
> 
>>
>>>>
>>>> We are interested in PLDM for various "inside the box"
>>>> communications
>>>> (at the moment for the Host <-> BMC communication). I'd like to
>>>> propose
>>>> a design for a PLDM stack on OpenBMC, and would send a design
>>>> template
>>>> for review on the mailing list in some amount of time (I've just
>>>> started
>>>> with some initial sketches). I'd like to also know if others have
>>>> embarked on a similar activity, so that we can collaborate
>>>> earlier
>>>> and
>>>> avoid duplicate work.
>>>
>>> Yes. Interested to collaborate.
>>> Which portion of PLDM are you working on, other than base?
>>> Platform Monitoring and Control?
>>> Firmware Update?
>>> BIOS Control andConfiguration?
>>> SMBIOS Transfer?
>>> FRU Data?
>>> Redfish Device Enablement?
>>>
>>> We are currently interested in Platform Monitoring and Control.
>>
>>
>> We're interested in each of these profiles for the BMC host
>> communications. Are you interested in Platform monitoring and
>> control
>> for communications involving the BMC and the host firmware, or the
>> BMC
>> and other devices?
> BMC and the host firmware initially.
> 
>>
>> Also, I have been thinking about the usefulness/feasibility of a
>> common
>> PLDM library (just the protocol piece - encoding and decoding PLDM
>> messages), so as to be able to share code between BMC and host
>> firmware.
>> This of course sets expectations on the library based on OpenBMC and
>> various host firmware stacks. Do you have an opinion on this?
> Glad that we are on the same page.
> My thinking at this point is to come up with a generic standalone "C"
> library which processes PLDM messages without regard to whether this
> message contains payload for Sensors, firmware update, etc., so that it
> can be ported to Host firmware if needed.

I was thinking of a C lib as well (given the lack of, or limited, C++
stdlib support on some host firmware stacks). Although, when you say a
lib that processes the PLDM messages, do you mean just the parsing part?

I think the processing/handling of a PLDM message would be platform
specific, because that involves mapping PLDM concepts to platform
concepts (for example, to D-Bus on OpenBMC). What I believe can go into
the common lib is the marshalling and unmarshalling of PLDM messages.
So, for example, if the platform has all the necessary information to
make a PLDM message, it can rely on this lib to actually prepare the
message for it, plus the reverse flow - decode an incoming PLDM message
into C-style data types. We'd have to work on what these APIs look like.
Consumers of this lib would be the PLDM app(s)/daemon(s).
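
To make the encode/decode-only split concrete, a sketch of what such
APIs could look like in portable C follows; the header layout and names
are purely illustrative, not a proposed wire format:

  #include <stddef.h>
  #include <stdint.h>

  /* Simplified, illustrative PLDM header - not a normative layout. */
  struct pldm_hdr {
      uint8_t instance_id;  /* request/response matching */
      uint8_t type;         /* PLDM type (base, platform, ...) */
      uint8_t command;      /* command code within the type */
  };

  /* Marshal: platform code supplies C-style values, the lib packs
   * them into a wire buffer it can hand to the MCTP layer. */
  int pldm_pack_hdr(const struct pldm_hdr *hdr, uint8_t *buf, size_t len)
  {
      if (len < 3)
          return -1;
      buf[0] = hdr->instance_id;
      buf[1] = hdr->type;
      buf[2] = hdr->command;
      return 0;
  }

  /* Unmarshal: decode an incoming buffer back into C-style data
   * types; what the daemon then does with it stays platform specific. */
  int pldm_unpack_hdr(const uint8_t *buf, size_t len, struct pldm_hdr *hdr)
  {
      if (len < 3)
          return -1;
      hdr->instance_id = buf[0];
      hdr->type = buf[1];
      hdr->command = buf[2];
      return 0;
  }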

>> [remainder of the quoted proposal and earlier replies snipped]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-11  1:14       ` Stewart Smith
@ 2018-12-11 18:26         ` Tanous, Ed
  2018-12-18  0:10           ` Stewart Smith
  0 siblings, 1 reply; 30+ messages in thread
From: Tanous, Ed @ 2018-12-11 18:26 UTC (permalink / raw)
  To: Stewart Smith
  Cc: Emily Shaffer, Supreeth Venkatesh, Dong Wei, Naidoo, Nilan,
	David Thompson, openbmc


> On Dec 10, 2018, at 5:15 PM, Stewart Smith <stewart@linux.ibm.com> wrote:
> 
> If we were going to bring a not-C language into firmware, I'd prefer we
> look at something modern like Rust, that's designed to not have
> programmers marching around with guns pointed at their feet.
> (I have C++ opinions, most of which can be summarised by NAK :)

Rust seems like an interesting language, but when I last evaluated it
for BMC use while looking at the redfish implementation, it suffered
from a few killer problems that would prevent its use on a
current-generation BMC.

1. The binaries are really big.  Just an application with the basics
(DBus, sockets, io loop) ends up at several megabytes the last time I
attempted it.  Most of our C++ equivalents consume less than half that.
I know Rust as a community has worked on the size issue, and is making
progress, but given the size constraints that we’re under, I don’t
think 2MB-minimum applications are worth the space they take.  For
reference, the project is working to make it possible to build an image
without python to save ~3MB from the final image.
2. Rust is relatively new.  Finding “seasoned” Rust developers,
especially ones that have that in combination with embedded skills, is
difficult.  I don’t believe we have anyone active on the project today
that has built Rust applications at the scale the OpenBMC project is at
these days.  At this point there are a few commercial products that use
Rust, so at least someone has ripped off that band-aid, but I don’t
know of any high-reliability, embedded products that have taken the
plunge yet.  If we want to be the first, we should evaluate what that’s
going to cost the project.
3. The supposed safety in Rust is only guaranteed if you never call
into a C library or native code.  While there are pure Rust-based
equivalents for a lot of the things we use, most of our code is
stitching together calls between libraries and northbound interfaces.
There might be one or two examples of where we could use Rust without
any C libraries, but I’m going to guess that’s about it.  Given that,
are we better off having a given application written in one language or
two?

In short, I’m interested to see how the Rust ecosystem progresses, and
I wait with bated breath for something to emerge that’s better than
C/C++, but today, I don’t think Rust applications in OpenBMC would have
my support.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-11  7:38         ` Deepak Kodihalli
@ 2018-12-12 22:50           ` Supreeth Venkatesh
  0 siblings, 0 replies; 30+ messages in thread
From: Supreeth Venkatesh @ 2018-12-12 22:50 UTC (permalink / raw)
  To: Deepak Kodihalli, Jeremy Kerr, openbmc
  Cc: Emily Shaffer, David Thompson, Dong Wei, Naidoo, Nilan

On Tue, 2018-12-11 at 13:08 +0530, Deepak Kodihalli wrote:
> On 10/12/18 11:10 PM, Supreeth Venkatesh wrote:
> > On Mon, 2018-12-10 at 11:44 +0530, Deepak Kodihalli wrote:
> > > On 07/12/18 10:39 PM, Supreeth Venkatesh wrote:
> > > > On Fri, 2018-12-07 at 10:43 +0530, Deepak Kodihalli wrote:
> > > > > On 07/12/18 8:11 AM, Jeremy Kerr wrote:
> > > > > > Hi OpenBMCers!
> > > > > > 
> > > > > > In an earlier thread, I promised to sketch out a design for
> > > > > > a
> > > > > > MCTP
> > > > > > implementation in OpenBMC, and I've included it below.
> > > > > 
> > > > > 
> > > > > Thanks Jeremy for sending this out. This looks good (have
> > > > > just
> > > > > one
> > > > > comment below).
> > > > > 
> > > > > Question for everyone : do you have plans to employ PLDM over
> > > > > MCTP?
> > > > 
> > > > Yes Deepak. we do eventually.
> > > 
> > > 
> > > Thanks for letting me know Supreeth!
> > 
> > My pleasure.
> > 
> > > 
> > > > > 
> > > > > We are interested in PLDM for various "inside the box"
> > > > > communications
> > > > > (at the moment for the Host <-> BMC communication). I'd like
> > > > > to
> > > > > propose
> > > > > a design for a PLDM stack on OpenBMC, and would send a design
> > > > > template
> > > > > for review on the mailing list in some amount of time (I've
> > > > > just
> > > > > started
> > > > > with some initial sketches). I'd like to also know if others
> > > > > have
> > > > > embarked on a similar activity, so that we can collaborate
> > > > > earlier
> > > > > and
> > > > > avoid duplicate work.
> > > > 
> > > > Yes. Interested to collaborate.
> > > > Which portion of PLDM are you working on, other than base?
> > > > Platform Monitoring and Control?
> > > > Firmware Update?
> > > > BIOS Control andConfiguration?
> > > > SMBIOS Transfer?
> > > > FRU Data?
> > > > Redfish Device Enablement?
> > > > 
> > > > We are currently interested in Platform Monitoring and Control.
> > > 
> > > 
> > > We're interested in each of these profiles for the BMC host
> > > communications. Are you interested in Platform monitoring and
> > > control
> > > for communications involving the BMC and the host firmware, or
> > > the
> > > BMC
> > > and other devices?
> > 
> > BMC and the host firmware initially.
> > 
> > > 
> > > Also, I have been thinking about the usefulness/feasibility of a
> > > common
> > > PLDM library (just the protocol piece - encoding and decoding
> > > PLDM
> > > messages), so as to be able to share code between BMC and host
> > > firmware.
> > > This of course sets expectations on the library based on OpenBMC
> > > and
> > > various host firmware stacks. Do you have an opinion on this?
> > 
> > Glad that we are on the same page.
> > My thinking at this point is to come up with a generic standalone
> > "C"
> > library which processes PLDM messages without regard to whether
> > this
> > message contains payload for Sensors, firmware update, etc., so
> > that it
> > can be ported to Host firmware if needed.
> 
> I was thinking of a C lib as well (given the lack of or limited C++ 
> stdlib support on some host firmware stacks). Although, when you say
> a 
> lib that processes the PLDM messages, do you mean just the parsing
> part?
> 
The example you gave below aptly sums up what I had in mind.
 
> I think the processing/handling of a PLDM message would be platform 
> specific, because that involves mapping PLDM concepts to platform 
> concepts (for eg to D-Bus on OpenBMC). What I believe can get to the 
> common lib is the marshalling and umarshalling of PLDM messages. So
> for 
> eg if the platform has all the necessary information to make a PLDM 
> message, it can rely on this lib to actually prepare the message for
> it. 
> Plus the reverse flow - decode an incoming PLDM message into C-style 
> data types. We'd have to work on what these APIs look like. Consumers
> of 
> this lib would be the PLDM app(s)/daemon(s).
Yes Exactly.

> [remainder of the quoted proposal and earlier replies snipped]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-11 18:26         ` Tanous, Ed
@ 2018-12-18  0:10           ` Stewart Smith
  0 siblings, 0 replies; 30+ messages in thread
From: Stewart Smith @ 2018-12-18  0:10 UTC (permalink / raw)
  To: Tanous, Ed
  Cc: Emily Shaffer, Supreeth Venkatesh, Dong Wei, Naidoo, Nilan,
	David Thompson, openbmc

"Tanous, Ed" <ed.tanous@intel.com> writes:
>> On Dec 10, 2018, at 5:15 PM, Stewart Smith <stewart@linux.ibm.com> wrote:
>> 
>> If we were going to bring a not-C language into firmware, I'd prefer we
>> look at something modern like Rust, that's designed to not have
>> programmers marching around with guns pointed at their feet.
>> (I have C++ opinions, most of which can be summarised by NAK :)
>
> Rust seems like an interesting language, but when I last evaluated it for BMC use while looking at the redfish implementation, it suffered from a few killer problems that would prevent its use on a current-generation BMC
>
> 1. The binaries are really big.  Just an application with the basics
> (DBus, sockets, io loop) ends up at several megabytes last time I
> attempted it.  Most of our c++ equivalents consume less than half
> that.  I know rust as a community has worked on the size issue, and is
> making progress, but given the size constraints that we’re under, I
> don’t think 2MB minimum applications are worth the space they take.
> For reference, the project is working to make it possible to build an
> image without python to save ~3MB from the final image.

Interesting. This is different from host firmware, as we'd be building
with nostdlib and providing our own base language support environment.

> 2. Rust is relatively new.  Finding “seasoned” rust developers,
> especially ones that have that in combination with embedded skills is
> difficult.  I don’t believe we have anyone active on the project today
> that has built rust applications at the scale the OpenBMC project is
> at these days.  At this point there are a few commercial products that
> use rust, so at least someone has ripped of that band aid, but I don’t
> know if any high reliability, embedded products that have taken the
> plunge yet.  If we want to be the first, we should evaluate what
> that’s going to cost for the project.

I'd agree, and that's certainly a concern I share. Firefox is probably
the biggest Rust user out there, and one that's been integrating Rust
code into a long-lived, large C++ codebase.

> 3. The supposed safety in rust is only guaranteed if you never call
> into a c library or native code.  While there’s pure rust based
> equivalents for a lot of the things we use, most of our code is
> stitching together calls between libraries, and northbound interfaces.
> There might be one or two examples of where we could use rust without
> any c libraries, but I’m going to guess there’s only that .  Given
> that, are we better off having a given application written in one
> language or two?

For our host firmware, all packages have at least two languages:
assembler, plus C or C++. Even small amounts of Rust are going to
increase the safety of a bunch of these critical components.

> In short, I’m interested to see how the rust ecosystem progresses, and
> I wait with baited breath for something to emerge that’s better than
> C/C++, but today, I don’t think rust applications in OpenBMC would
> have my support.

I think the arguments for/against Rust usage in OpenBMC are pretty
different to those for host firmware - I'm hoping to spend some time
getting a PoC up for our firmware stack at some point in the future,
though.


-- 
Stewart Smith
OPAL Architect, IBM.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2018-12-07  2:41 Initial MCTP design proposal Jeremy Kerr
                   ` (2 preceding siblings ...)
  2018-12-07 16:38 ` Supreeth Venkatesh
@ 2019-02-07 15:51 ` Brad Bishop
  2019-02-08  6:48   ` Jeremy Kerr
  3 siblings, 1 reply; 30+ messages in thread
From: Brad Bishop @ 2019-02-07 15:51 UTC (permalink / raw)
  To: Jeremy Kerr
  Cc: openbmc, Emily Shaffer, David Thompson, Dong Wei,
	Supreeth Venkatesh, Naidoo, Nilan

On Fri, Dec 07, 2018 at 10:41:08AM +0800, Jeremy Kerr wrote:
>Hi OpenBMCers!
>
>In an earlier thread, I promised to sketch out a design for a MCTP
>implementation in OpenBMC, and I've included it below.

Hi Jeremy

Were you planning on submitting this to the docs repository as a design?
Do you need some help with that?

I bring this up because another MCTP-related design has appeared:

https://gerrit.openbmc-project.xyz/18031

I haven't had a chance to determine how (if at all) these two designs
overlap.  Mostly I just wanted to bring it to your awareness.

thx - brad

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-02-07 15:51 ` Brad Bishop
@ 2019-02-08  6:48   ` Jeremy Kerr
  2019-02-08 15:55     ` Supreeth Venkatesh
  2019-02-11 18:57     ` Brad Bishop
  0 siblings, 2 replies; 30+ messages in thread
From: Jeremy Kerr @ 2019-02-08  6:48 UTC (permalink / raw)
  To: Brad Bishop
  Cc: openbmc, Emily Shaffer, David Thompson, Dong Wei,
	Supreeth Venkatesh, Naidoo, Nilan

Hi Brad,

> Were you planning on submitting this to the docs repository as a
> design?

Yes! :)

> Do you need some help with that?

Just a nudge into doing so, which your email provided!

I've just pushed to gerrit:

  https://gerrit.openbmc-project.xyz/c/openbmc/docs/+/18089

- this is the doc that I'd originally sent out (top of this thread),
incorporating suggestions discussed there.

In the meantime, I've mainly been working on the prototype sketches for
the MCTP library, and I've just pushed that too:

  http://github.com/jk-ozlabs/libmctp

It's very prototypey at this stage, and I'm open to input and pull
requests. I'm going to be quite permissive with accepting changes at
this early stage...
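
For readers skimming the repo, a sketch of how a consumer might drive a
library with the upper/lower split discussed in the proposal is below.
All of the names are illustrative stand-ins and may not match the
actual libmctp API:

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* All names below are illustrative stand-ins for whatever the
   * prototype actually exposes - check the repo for the real API. */
  struct mctp;                         /* core context */
  struct mctp_binding_serial;          /* one hardware binding */
  typedef uint8_t mctp_eid_t;

  extern struct mctp *mctp_init(void);
  extern struct mctp_binding_serial *serial_init(const char *path);
  extern void mctp_register_binding(struct mctp *m,
                                    struct mctp_binding_serial *b,
                                    mctp_eid_t local_eid);
  extern void mctp_set_rx(struct mctp *m,
                          void (*rx)(mctp_eid_t src, const void *msg,
                                     size_t len));
  extern int mctp_message_tx(struct mctp *m, mctp_eid_t dest,
                             const void *msg, size_t len);

  /* Upper interface: called with a fully reassembled message. */
  static void rx_handler(mctp_eid_t src, const void *msg, size_t len)
  {
      printf("message from EID %d, %zu bytes\n", (int)src, len);
      (void)msg;
  }

  int main(void)
  {
      struct mctp *m = mctp_init();
      struct mctp_binding_serial *serial = serial_init("/dev/ttyS1");

      /* Lower interface: attach the binding, claim a local EID. */
      mctp_register_binding(m, serial, 8);
      mctp_set_rx(m, rx_handler);

      /* Upper interface: send a full message; the core splits it
       * into packets for the binding. */
      uint8_t msg[] = { 0x01, 0x02, 0x03 };
      return mctp_message_tx(m, 9, msg, sizeof(msg));
  }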

If we go ahead with the MCTP designs, and want to use this, I'd propose
moving it from my personal github repo into /openbmc/ (after suitable
updates to get it into OpenBMC standards, of course). Patches welcome!

Cheers,


Jeremy

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: Initial MCTP design proposal
  2019-02-08  6:48   ` Jeremy Kerr
@ 2019-02-08 15:55     ` Supreeth Venkatesh
  2019-02-11 18:57     ` Brad Bishop
  1 sibling, 0 replies; 30+ messages in thread
From: Supreeth Venkatesh @ 2019-02-08 15:55 UTC (permalink / raw)
  To: Jeremy Kerr, Brad Bishop
  Cc: openbmc, Emily Shaffer, David Thompson, Dong Wei, Naidoo, Nilan,
	Deepak Kodihalli

Jeremy,

Thanks for posting the review.
May I suggest requesting an MCTP repository in the OpenBMC GitHub organization for effective collaboration, similar to PLDM, rather than using a personal GitHub repo?
This will also avoid duplicating effort and encourage participation from multiple contributors.

Thanks,
Supreeth


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-02-08  6:48   ` Jeremy Kerr
  2019-02-08 15:55     ` Supreeth Venkatesh
@ 2019-02-11 18:57     ` Brad Bishop
  2019-02-12  8:43       ` Jeremy Kerr
  1 sibling, 1 reply; 30+ messages in thread
From: Brad Bishop @ 2019-02-11 18:57 UTC (permalink / raw)
  To: Jeremy Kerr
  Cc: Emily Shaffer, David Thompson, openbmc, Supreeth Venkatesh,
	Naidoo, Nilan, Dong Wei

>If we go ahead with the MCTP designs, and want to use this, I'd propose
>moving it from my personal github repo into /openbmc/ (after suitable
>updates to get it into OpenBMC standards, of course). Patches welcome!

There was some interest in performing a formal code review on Gerrit of
what you have written thus far (or intend to distribute).

To that end I could create an initial empty repository in the openbmc
namespace and you could push your tree there for review, when you are
ready.  How does that sound?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-02-11 18:57     ` Brad Bishop
@ 2019-02-12  8:43       ` Jeremy Kerr
  2019-03-06 20:04         ` Ed Tanous
  0 siblings, 1 reply; 30+ messages in thread
From: Jeremy Kerr @ 2019-02-12  8:43 UTC (permalink / raw)
  To: Brad Bishop
  Cc: Emily Shaffer, David Thompson, openbmc, Supreeth Venkatesh,
	Naidoo, Nilan, Dong Wei

Hi Brad,

> > If we go ahead with the MCTP designs, and want to use this, I'd
> > propose
> > moving it from my personal github repo into /openbmc/ (after
> > suitable
> > updates to get it into OpenBMC standards, of course). Patches
> > welcome!
> 
> There was some interest in performing a formal code review on Gerrit
> of
> what you have written thus far (or intend to distribute).

I'm okay with that, but I expect there to be a lot of churn while we
work out the prototypes, which might make the formal reviews a bit
pointless (if we end up throwing out the code that gets reviewed).

For example, we don't yet have a solid demonstration that the interfaces
are fit for what we need. That'll shake out as part of the prototyping
work that I'm doing now, but there may be changes needed between now and
then.

> To that end I could create an initial empty repository in the openbmc
> namespace and you could push your tree there for review, when you are
> ready.  How does that sound?

Yep, I'd be happy to do this when we decide on a good time to do the
reviews.

Cheers,


Jeremy

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-02-12  8:43       ` Jeremy Kerr
@ 2019-03-06 20:04         ` Ed Tanous
  2019-03-07  8:46           ` Deepak Kodihalli
  2019-03-18 12:12           ` Brad Bishop
  0 siblings, 2 replies; 30+ messages in thread
From: Ed Tanous @ 2019-03-06 20:04 UTC (permalink / raw)
  To: openbmc


On 2/12/19 12:43 AM, Jeremy Kerr wrote:
> Hi Brad,
> 
> 
> For example, we don't yet have a solid demonstration that the interfaces
> are fit for what we need. That'll shake out as part of the prototyping
> work that I'm doing now, but there may be changes needed between now and
> then.
While this statement was true at the time you wrote it, I don't think
that's true now.  I've got the basics of the transport working against a
real NVMe drive target over MCTP.  At least the simplest case seems to
work well, and I'm pretty happy with the result.  I'd like to see the
repo get started just so we have a place for Gerrit reviews.

While I agree there's going to be some churn, I think it's probably
better to have a place for said churn than pushing PRs between our
individual personal GitHub repositories.

> 
>> To that end I could create an initial empty repository in the openbmc
>> namespace and you could push your tree there for review, when you are
>> ready.  How does that sound?
I would like to see this happen.  Once it does, I'll likely push reviews
for the commits in this tree to it (which includes both Jeremy's commits
as well as my own):
https://github.com/edtanous/libmctp/commits/pr1


> 
> Yep, I'd be happy to do this when we decide on a good time to do the
> reviews.
I vote the time is now.  Any objections?

-Ed

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-03-06 20:04         ` Ed Tanous
@ 2019-03-07  8:46           ` Deepak Kodihalli
  2019-03-07 19:35             ` Ed Tanous
  2019-03-07 20:40             ` Supreeth Venkatesh
  2019-03-18 12:12           ` Brad Bishop
  1 sibling, 2 replies; 30+ messages in thread
From: Deepak Kodihalli @ 2019-03-07  8:46 UTC (permalink / raw)
  To: Jeremy Kerr, Tanous, Ed; +Cc: OpenBMC Maillist

On 07/03/19 1:34 AM, Ed Tanous wrote:
> 
> On 2/12/19 12:43 AM, Jeremy Kerr wrote:

>>> To that end I could create an initial empty repository in the openbmc
>>> namespace and you could push your tree there for review, when you are
>>> ready.  How does that sound?
> I would like to see this happen.  Once it does, I'll likely push reviews
> for the commits in this tree to it (which includes both Jeremys commits
> as well as my own.
> https://github.com/edtanous/libmctp/commits/pr1
> 
> 
>>
>> Yep, I'd be happy to do this when we decide on a good time to do the
>> reviews.
> I vote the time is now.  Any objections?

I agree with this as well. I've got two PLDM stacks on my laptop 
(requester and responder) talking to each other over libmctp (over the 
serial binding, making use of fifos instead of an actual serial 
connection). I'd like to push a couple of small commits to libmctp as 
well (I've made a PR for one of them so far).
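
To give a sense of what actually flows between the two stacks: each PLDM
request is just an MCTP message body whose first byte is the message type
(0x01 for PLDM, per DSP0239), followed by the three-byte PLDM header from
DSP0240 and then the command payload. The sketch below is only
illustrative - struct pldm_hdr and send_pldm_request() are names made up
for this mail, not from the actual tree - but the byte layout follows the
specs:

/* Illustrative only: frame a PLDM request as an MCTP message body and
 * hand it to libmctp. The 0x01 message type (DSP0239) and the 3-byte
 * PLDM header layout (DSP0240) come from the specs; the struct and
 * helper names here are hypothetical.
 */
#include <stdint.h>
#include <string.h>

#include "libmctp.h"

#define MCTP_MSG_TYPE_PLDM 0x01

struct pldm_hdr {
        uint8_t rq_d_inst; /* Rq(1) | D(1) | rsvd(1) | instance id(5) */
        uint8_t type;      /* header version(2) | PLDM type(6) */
        uint8_t command;   /* PLDM command code */
} __attribute__((packed));

static int send_pldm_request(struct mctp *mctp, mctp_eid_t dest,
                             uint8_t instance, uint8_t pldm_type,
                             uint8_t command,
                             const void *payload, size_t payload_len)
{
        uint8_t buf[64];
        struct pldm_hdr *hdr = (struct pldm_hdr *)&buf[1];

        if (1 + sizeof(*hdr) + payload_len > sizeof(buf))
                return -1;

        buf[0] = MCTP_MSG_TYPE_PLDM;                /* IC bit clear */
        hdr->rq_d_inst = 0x80 | (instance & 0x1f);  /* Rq=1: request */
        hdr->type = pldm_type & 0x3f;               /* header version 0 */
        hdr->command = command;
        memcpy(&buf[1 + sizeof(*hdr)], payload, payload_len);

        return mctp_message_tx(mctp, dest, buf,
                               1 + sizeof(*hdr) + payload_len);
}

The responder does the same in reverse: check the type byte, parse the
header, then dispatch on PLDM type and command.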

Regards,
Deepak

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-03-07  8:46           ` Deepak Kodihalli
@ 2019-03-07 19:35             ` Ed Tanous
  2019-03-08  4:58               ` Deepak Kodihalli
  2019-03-07 20:40             ` Supreeth Venkatesh
  1 sibling, 1 reply; 30+ messages in thread
From: Ed Tanous @ 2019-03-07 19:35 UTC (permalink / raw)
  To: Deepak Kodihalli, Jeremy Kerr; +Cc: OpenBMC Maillist



On 3/7/19 12:46 AM, Deepak Kodihalli wrote:
> 
> I agree to this as well. I've got two PLDM stacks on my laptop
> (requester and responder) talking to each other over libmctp (over the
> serial binding - making use of fifos instead of an actual serial
> connection). I'd like to push a couple small commits to libmctp as well
> (I made a PR for one of them at the moment).
This is one thing I've wondered about before with the way PLDM is being
built out.  Testing the implementation against itself doesn't really test
its correctness, unless you're creating a bunch of asserts on data
structure elements, and even then, it just tests that you're producing
what you expect, not that you've faithfully implemented the spec.

Do you plan to run this implementation against something other than
itself to test at some point?  If so, what device/implementation are you
planning to target?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-03-07  8:46           ` Deepak Kodihalli
  2019-03-07 19:35             ` Ed Tanous
@ 2019-03-07 20:40             ` Supreeth Venkatesh
  1 sibling, 0 replies; 30+ messages in thread
From: Supreeth Venkatesh @ 2019-03-07 20:40 UTC (permalink / raw)
  To: Deepak Kodihalli, Jeremy Kerr, Tanous, Ed; +Cc: OpenBMC Maillist

On Thu, 2019-03-07 at 14:16 +0530, Deepak Kodihalli wrote:
> On 07/03/19 1:34 AM, Ed Tanous wrote:
> > 
> > On 2/12/19 12:43 AM, Jeremy Kerr wrote:
> > > > To that end I could create an initial empty repository in the
> > > > openbmc
> > > > namespace and you could push your tree there for review, when
> > > > you are
> > > > ready.  How does that sound?
> > 
> > I would like to see this happen.  Once it does, I'll likely push
> > reviews
> > for the commits in this tree to it (which includes both Jeremys
> > commits
> > as well as my own.
> > https://github.com/edtanous/libmctp/commits/pr1
> > 
> > 
> > > 
> > > Yep, I'd be happy to do this when we decide on a good time to do
> > > the
> > > reviews.
> > 
> > I vote the time is now.  Any objections?

+1

> 
> I agree to this as well. I've got two PLDM stacks on my laptop 
> (requester and responder) talking to each other over libmctp (over
> the 
> serial binding - making use of fifos instead of an actual serial 
> connection). I'd like to push a couple small commits to libmctp as
> well 
> (I made a PR for one of them at the moment).
> 
> Regards,
> Deepak
> 

Thanks,
Supreeth

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-03-07 19:35             ` Ed Tanous
@ 2019-03-08  4:58               ` Deepak Kodihalli
  2019-03-08  5:21                 ` Deepak Kodihalli
  0 siblings, 1 reply; 30+ messages in thread
From: Deepak Kodihalli @ 2019-03-08  4:58 UTC (permalink / raw)
  To: Ed Tanous, Jeremy Kerr; +Cc: OpenBMC Maillist

On 08/03/19 1:05 AM, Ed Tanous wrote:
> 
> 
> On 3/7/19 12:46 AM, Deepak Kodihalli wrote:
>>
>> I agree to this as well. I've got two PLDM stacks on my laptop
>> (requester and responder) talking to each other over libmctp (over the
>> serial binding - making use of fifos instead of an actual serial
>> connection). I'd like to push a couple small commits to libmctp as well
>> (I made a PR for one of them at the moment).
> This is one thing I've wondered before about the way PLDM is being built
> out.  Testing the implementation against itself doesn't really test the
> correctness of it, unless you're creating a bunch of asserts on data
> structure elements, and even then, it just tests that you're giving what
> you expect, not that you faithfully implemented the spec.
> 
> Do you plan to run this implementation against something other than
> itself to test at some point?  If so, what device/implementation are you
> planning to target?

BMC-to-host communication over MCTP, over different physical channels 
(LPC, for example), is one of the first goals. There are more details in 
the PLDM/MCTP design docs that Jeremy and I sent out.

The test that I described above was to integrate the PLDM stack with 
libmctp and see how that goes, without having to wait for another device 
implementation to be ready. I think this is a logical step towards 
employing PLDM/MCTP for device-to-device communication. The PLDM daemon 
will eventually link with the MCTP libs.
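
As a rough picture of what that linkage might look like: the daemon
registers a receive callback with libmctp and dispatches anything with
the PLDM message type. The callback signature below is the one in the
current libmctp prototype and may change; pldm_handle_request() is just
a made-up placeholder for the responder's dispatch entry point.

/* Sketch only: how the PLDM daemon might hook into libmctp's receive
 * path. mctp_set_rx_all() and the callback signature are taken from
 * the current prototype; pldm_handle_request() is hypothetical.
 */
#include <stddef.h>
#include <stdint.h>

#include "libmctp.h"

#define MCTP_MSG_TYPE_PLDM 0x01

/* hypothetical responder entry point, implemented elsewhere */
extern void pldm_handle_request(uint8_t src_eid, const uint8_t *pldm_msg,
                                size_t len);

static void mctp_rx(uint8_t src_eid, void *data, void *msg, size_t len)
{
        uint8_t *bytes = msg;

        /* The first byte of the MCTP message body is the message type;
         * bit 7 is the integrity-check flag, so mask it off. */
        if (len < 1 || (bytes[0] & 0x7f) != MCTP_MSG_TYPE_PLDM)
                return;

        /* Pass the PLDM header + payload up to the responder. */
        pldm_handle_request(src_eid, bytes + 1, len - 1);
}

/* Called once during daemon init, after mctp_register_bus(). */
void pldm_mctp_bind(struct mctp *mctp)
{
        mctp_set_rx_all(mctp, mctp_rx, NULL);
}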

Regards,
Deepak

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-03-08  4:58               ` Deepak Kodihalli
@ 2019-03-08  5:21                 ` Deepak Kodihalli
  0 siblings, 0 replies; 30+ messages in thread
From: Deepak Kodihalli @ 2019-03-08  5:21 UTC (permalink / raw)
  To: Ed Tanous, Jeremy Kerr; +Cc: OpenBMC Maillist

On 08/03/19 10:28 AM, Deepak Kodihalli wrote:
> On 08/03/19 1:05 AM, Ed Tanous wrote:
>>
>>
>> On 3/7/19 12:46 AM, Deepak Kodihalli wrote:
>>>
>>> I agree to this as well. I've got two PLDM stacks on my laptop
>>> (requester and responder) talking to each other over libmctp (over the
>>> serial binding - making use of fifos instead of an actual serial
>>> connection). I'd like to push a couple small commits to libmctp as well
>>> (I made a PR for one of them at the moment).
>> This is one thing I've wondered before about the way PLDM is being built
>> out.  Testing the implementation against itself doesn't really test the
>> correctness of it, unless you're creating a bunch of asserts on data
>> structure elements, and even then, it just tests that you're giving what
>> you expect, not that you faithfully implemented the spec.


Also, the application was not tested against itself. The application is 
mostly a responder now; it was tested against a requester 
implementation. This commit chain might help clarify: 
https://gerrit.openbmc-project.xyz/#/c/openbmc/pldm/+/19011/

>> Do you plan to run this implementation against something other than
>> itself to test at some point?  If so, what device/implementation are you
>> planning to target?
> 
> BMC to host communications over MCTP, over different physical channels 
> (LPC for eg), is one of the first goals. There are more details in the 
> PLDM/MCTP design docs that Jeremy and I had sent out.
> 
> The test that I described above was to integrate the PLDM stack with 
> libmctp and see how that goes, without having to wait for another device 
> implementation to be ready. I think this is a logical step towards 
> employing PLDM/MCTP for device-device communication. The PLDM daemon 
> will link with the MCTP libs, eventually.
> 
> Regards,
> Deepak
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Initial MCTP design proposal
  2019-03-06 20:04         ` Ed Tanous
  2019-03-07  8:46           ` Deepak Kodihalli
@ 2019-03-18 12:12           ` Brad Bishop
  1 sibling, 0 replies; 30+ messages in thread
From: Brad Bishop @ 2019-03-18 12:12 UTC (permalink / raw)
  To: Ed Tanous; +Cc: openbmc

>> Yep, I'd be happy to do this when we decide on a good time to do the
>> reviews.
>I vote the time is now.  Any objections?

openbmc/libmctp repository created.

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2019-03-18 12:12 UTC | newest]

Thread overview: 30+ messages
2018-12-07  2:41 Initial MCTP design proposal Jeremy Kerr
2018-12-07  4:15 ` Naidoo, Nilan
2018-12-07  5:06   ` Jeremy Kerr
2018-12-07  5:40     ` Naidoo, Nilan
2018-12-07  5:13 ` Deepak Kodihalli
2018-12-07  7:41   ` Jeremy Kerr
2018-12-07 17:09   ` Supreeth Venkatesh
2018-12-07 18:53     ` Emily Shaffer
2018-12-07 20:06       ` Supreeth Venkatesh
2018-12-07 21:19       ` Jeremy Kerr
2018-12-11  1:14       ` Stewart Smith
2018-12-11 18:26         ` Tanous, Ed
2018-12-18  0:10           ` Stewart Smith
2018-12-10  6:14     ` Deepak Kodihalli
2018-12-10 17:40       ` Supreeth Venkatesh
2018-12-11  7:38         ` Deepak Kodihalli
2018-12-12 22:50           ` Supreeth Venkatesh
2018-12-07 16:38 ` Supreeth Venkatesh
2019-02-07 15:51 ` Brad Bishop
2019-02-08  6:48   ` Jeremy Kerr
2019-02-08 15:55     ` Supreeth Venkatesh
2019-02-11 18:57     ` Brad Bishop
2019-02-12  8:43       ` Jeremy Kerr
2019-03-06 20:04         ` Ed Tanous
2019-03-07  8:46           ` Deepak Kodihalli
2019-03-07 19:35             ` Ed Tanous
2019-03-08  4:58               ` Deepak Kodihalli
2019-03-08  5:21                 ` Deepak Kodihalli
2019-03-07 20:40             ` Supreeth Venkatesh
2019-03-18 12:12           ` Brad Bishop
