* MCTP over PCI on AST2500
@ 2019-11-20  5:26 rgrs
  2019-11-20  6:54 ` Vijay Khemka
  0 siblings, 1 reply; 27+ messages in thread
From: rgrs @ 2019-11-20  5:26 UTC (permalink / raw)
  To: openbmc

Hi,

Does OpenBMC support MCTP over PCI?
As in, drivers that use PCIe VDM data transfers via the MCTP controller in the AST2500.

Thanks,
rg


* Re: MCTP over PCI on AST2500
  2019-11-20  5:26 MCTP over PCI on AST2500 rgrs
@ 2019-11-20  6:54 ` Vijay Khemka
  2019-11-20  6:59   ` Khetan, Sharad
  0 siblings, 1 reply; 27+ messages in thread
From: Vijay Khemka @ 2019-11-20  6:54 UTC (permalink / raw)
  To: rgrs, openbmc

I don't see any Linux kernel driver supporting MCTP upstream. I would love to see one and would like to contribute as well.

From: openbmc <openbmc-bounces+vijaykhemka=fb.com@lists.ozlabs.org> on behalf of rgrs <rgrs@protonmail.com>
Reply-To: rgrs <rgrs@protonmail.com>
Date: Tuesday, November 19, 2019 at 9:42 PM
To: "openbmc@lists.ozlabs.org" <openbmc@lists.ozlabs.org>
Subject: MCTP over PCI on AST2500

Hi,

Does OpenBMC support MCTP over PCI?
As in, drivers that use PCIe VDM data transfers via the MCTP controller in the AST2500.

Thanks,
rg



* RE: MCTP over PCI on AST2500
  2019-11-20  6:54 ` Vijay Khemka
@ 2019-11-20  6:59   ` Khetan, Sharad
  2019-11-22  0:38     ` Andrew Jeffery
  0 siblings, 1 reply; 27+ messages in thread
From: Khetan, Sharad @ 2019-11-20  6:59 UTC (permalink / raw)
  To: Vijay Khemka, rgrs, openbmc

Intel is working on MCTP over PCI (VDM data transfers). We will share details shortly.

Thanks,
-Sharad

From: openbmc <openbmc-bounces+sharad.khetan=intel.com@lists.ozlabs.org> On Behalf Of Vijay Khemka
Sent: Tuesday, November 19, 2019 10:54 PM
To: rgrs <rgrs@protonmail.com>; openbmc@lists.ozlabs.org
Subject: Re: MCTP over PCI on AST2500

I don't see any Linux kernel driver supporting MCTP upstream. I would love to see one and would like to contribute as well.

From: openbmc <openbmc-bounces+vijaykhemka=fb.com@lists.ozlabs.org> on behalf of rgrs <rgrs@protonmail.com>
Reply-To: rgrs <rgrs@protonmail.com>
Date: Tuesday, November 19, 2019 at 9:42 PM
To: "openbmc@lists.ozlabs.org" <openbmc@lists.ozlabs.org>
Subject: MCTP over PCI on AST2500

Hi,

Does OpenBMC support MCTP over PCI?
As in, drivers that use PCIe VDM data transfers via the MCTP controller in the AST2500.

Thanks,
rg



* Re: MCTP over PCI on AST2500
  2019-11-20  6:59   ` Khetan, Sharad
@ 2019-11-22  0:38     ` Andrew Jeffery
  2019-12-21  0:15       ` Khetan, Sharad
  0 siblings, 1 reply; 27+ messages in thread
From: Andrew Jeffery @ 2019-11-22  0:38 UTC (permalink / raw)
  To: Sharad Khetan, Vijay Khemka, rgrs, openbmc



On Wed, 20 Nov 2019, at 17:29, Khetan, Sharad wrote:
>  
> Intel is working on MCTP over PCI (VDM data transfers). We will share 
> details shortly.
> 

In the kernel? What does the userspace interface look like?


* RE: MCTP over PCI on AST2500
  2019-11-22  0:38     ` Andrew Jeffery
@ 2019-12-21  0:15       ` Khetan, Sharad
  2020-01-09  1:57         ` Andrew Jeffery
  0 siblings, 1 reply; 27+ messages in thread
From: Khetan, Sharad @ 2019-12-21  0:15 UTC (permalink / raw)
  To: Andrew Jeffery, Vijay Khemka, rgrs, openbmc

Hi Andrew,
Sorry for late response.
The plan is to have MCTP in user space. 

Thanks,
-Sharad

-----Original Message-----
From: Andrew Jeffery <andrew@aj.id.au> 
Sent: Thursday, November 21, 2019 4:39 PM
To: Khetan, Sharad <sharad.khetan@intel.com>; Vijay Khemka <vijaykhemka@fb.com>; rgrs <rgrs@protonmail.com>; openbmc@lists.ozlabs.org
Cc: Jeremy Kerr <jk@ozlabs.org>; Deepak Kodihalli <dkodihal@linux.vnet.ibm.com>
Subject: Re: MCTP over PCI on AST2500



On Wed, 20 Nov 2019, at 17:29, Khetan, Sharad wrote:
>  
> Intel is working on MCTP over PCI (VDM data transfers). We will share 
> details shortly.
> 

In the kernel? What does the userspace interface look like?


* Re: MCTP over PCI on AST2500
  2019-12-21  0:15       ` Khetan, Sharad
@ 2020-01-09  1:57         ` Andrew Jeffery
  2020-01-09 18:17           ` Vijay Khemka
                             ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Andrew Jeffery @ 2020-01-09  1:57 UTC (permalink / raw)
  To: Sharad Khetan, Vijay Khemka, rgrs, openbmc



On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
> Hi Andrew,
> Sorry for late response.
> The plan is to have MCTP in user space. 
> 

How are you handling this then? mmap()'ing the BAR from sysfs?

I plan to get back to implementing in-kernel socket-based MCTP shortly.
Unfortunately it slipped back a little in my priority list late last year. I'd be
interested in your feedback on the proposal when I get something written
down.

Andrew


* Re: MCTP over PCI on AST2500
  2020-01-09  1:57         ` Andrew Jeffery
@ 2020-01-09 18:17           ` Vijay Khemka
  2020-01-09 20:45             ` Richard Hanley
  2020-01-10  0:30           ` Andrew Jeffery
  2020-01-10  3:40           ` Michael Richardson
  2 siblings, 1 reply; 27+ messages in thread
From: Vijay Khemka @ 2020-01-09 18:17 UTC (permalink / raw)
  To: Andrew Jeffery, Sharad Khetan, rgrs, openbmc

This will be much better if implemented in the kernel.

-Vijay
On 1/8/20, 5:55 PM, "Andrew Jeffery" <andrew@aj.id.au> wrote:

    
    
    On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
    > Hi Andrew,
    > Sorry for late response.
    > The plan is to have MCTP in user space. 
    > 
    
    How are you handling this then? mmap()'ing the BAR from sysfs?
    
    I plan to get back to implementing in-kernel socket-based MCTP shortly.
    Unfortunately it slipped back a little in my priority list late last year. I'd be
    interested in your feedback on the proposal when I get something written
    down.
    
    Andrew
    



* Re: MCTP over PCI on AST2500
  2020-01-09 18:17           ` Vijay Khemka
@ 2020-01-09 20:45             ` Richard Hanley
  2020-01-10  1:29               ` Andrew Jeffery
  0 siblings, 1 reply; 27+ messages in thread
From: Richard Hanley @ 2020-01-09 20:45 UTC (permalink / raw)
  To: Vijay Khemka; +Cc: Andrew Jeffery, Sharad Khetan, rgrs, openbmc

I'll add a +1 in interest for MCTP.

Performance would be better if this is moved to the kernel, but I'm a bit
curious about any other pros and cons of working in userspace.

One of our most immediate use cases for MCTP would be in a UEFI BIOS before
a Redfish client can be bootstrapped.  Would things be more portable for
BIOS vendors if this were done in userspace?  I genuinely don't know enough
about that area to know which is more flexible.

-Richard


On Thu, Jan 9, 2020 at 10:18 AM Vijay Khemka <vijaykhemka@fb.com> wrote:

> This will be much better if implemented in the kernel.
>
> -Vijay
> On 1/8/20, 5:55 PM, "Andrew Jeffery" <andrew@aj.id.au> wrote:
>
>
>
>     On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
>     > Hi Andrew,
>     > Sorry for late response.
>     > The plan is to have MCTP in user space.
>     >
>
>     How are you handling this then? mmap()'ing the BAR from sysfs?
>
>     I plan to get back to implementing in-kernel socket-based MCTP shortly.
>     Unfortunately it slipped back a little in my priority list late last
> year. I'd be
>     interested in your feedback on the proposal when I get something
> written
>     down.
>
>     Andrew
>
>
>


* Re: MCTP over PCI on AST2500
  2020-01-09  1:57         ` Andrew Jeffery
  2020-01-09 18:17           ` Vijay Khemka
@ 2020-01-10  0:30           ` Andrew Jeffery
  2020-01-13 16:53             ` Khetan, Sharad
  2020-01-10  3:40           ` Michael Richardson
  2 siblings, 1 reply; 27+ messages in thread
From: Andrew Jeffery @ 2020-01-10  0:30 UTC (permalink / raw)
  To: Sharad Khetan, Vijay Khemka, rgrs, openbmc



On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
> 
> 
> On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
> > Hi Andrew,
> > Sorry for late response.
> > The plan is to have MCTP in user space. 
> > 
> 
> How are you handling this then? mmap()'ing the BAR from sysfs?

Sorry, let me put my brain back in, I was thinking of the wrong side
of the BMC/Host MCTP channel. How much were you planning to
do in userspace on the BMC? As in, are you planning to drive the BMC's
PCIe MCTP controller from userspace (presumably via /dev/mem)?
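
For reference, driving the controller that way would look roughly like the
sketch below. The base address and the register offset are assumptions for
the sake of the example; the real layout is in the ASPEED datasheet:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Assumed values, for illustration only; check the AST2500
     * datasheet for the real MCTP controller base and registers. */
    #define MCTP_CTRL_BASE 0x1e6e8000UL
    #define MCTP_CTRL_SIZE 0x1000UL

    int main(void)
    {
            int fd = open("/dev/mem", O_RDWR | O_SYNC);
            if (fd < 0) {
                    perror("open /dev/mem");
                    return 1;
            }

            volatile uint32_t *regs = mmap(NULL, MCTP_CTRL_SIZE,
                            PROT_READ | PROT_WRITE, MAP_SHARED,
                            fd, MCTP_CTRL_BASE);
            if (regs == MAP_FAILED) {
                    perror("mmap");
                    close(fd);
                    return 1;
            }

            /* Read an (assumed) status register at offset 0x0. */
            printf("status: 0x%08x\n", (unsigned int)regs[0]);

            munmap((void *)regs, MCTP_CTRL_SIZE);
            close(fd);
            return 0;
    }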


* Re: MCTP over PCI on AST2500
  2020-01-09 20:45             ` Richard Hanley
@ 2020-01-10  1:29               ` Andrew Jeffery
  0 siblings, 0 replies; 27+ messages in thread
From: Andrew Jeffery @ 2020-01-10  1:29 UTC (permalink / raw)
  To: Richard Hanley, Vijay Khemka; +Cc: Sharad Khetan, rgrs, openbmc

Hi Richard,

On Fri, 10 Jan 2020, at 07:15, Richard Hanley wrote:
> I'll add a +1 in interest for MCTP.
> 
> Performance would be better if this is moved to the kernel, but I'm a 
> bit curious about any other pros and cons of working in userspace.
> 
> One of our most immediate use cases for MCTP would be in a UEFI BIOS 
> before a Redfish client can be bootstrapped. Would things be more 
> portable for BIOS vendors if this were done in userspace? I genuinely 
> don't know enough about that area to know which is more flexible.

As MCTP is just a transport it has a fairly well-contained set of behaviours
(by contrast, see PLDM). The idea of implementing MCTP in the kernel isn't
really about performance so much as providing a consistent, binding-
independent interface to userspace. The advantage here is that as the
bindings would also be implemented in-kernel we avoid creating bespoke
interfaces to plumb binding-specific behaviours out to userspace just to
hook into e.g. libmctp. This should lead to less friction getting patches
adding support for new bindings merged upstream (at the cost of getting
an MCTP subsystem into the kernel).

The proposal is to add a new socket address family, AF_MCTP. A number of
MCTP concepts map fairly neatly onto existing networking concepts - it's a
packet-switched network routing data between components inside the
platform over heterogeneous bus types. The approach is somewhat inspired
by AF_CAN for CAN bus. libmctp already prepares its consumers for a socket-
based interface with the demux daemon, so existing consumers would only
need a small change to switch to the kernel-based socket interface.
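
To give a feel for the shape of it, a send over such an interface might look
like the sketch below. The address family number, struct layout and constants
are illustrative guesses at the proposal, not a merged kernel API:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Illustrative guesses at the proposed API; none of this is a
     * merged kernel interface. */
    #define AF_MCTP 45

    struct sockaddr_mctp {
            unsigned short smctp_family;  /* AF_MCTP */
            int smctp_network;            /* MCTP network ID */
            unsigned char smctp_addr;     /* remote endpoint ID (EID) */
            unsigned char smctp_type;     /* MCTP message type */
    };

    int main(void)
    {
            struct sockaddr_mctp addr = {
                    .smctp_family = AF_MCTP,
                    .smctp_network = 1,
                    .smctp_addr = 8,     /* remote EID */
                    .smctp_type = 0x01,  /* e.g. PLDM */
            };
            unsigned char msg[] = { 0x01, 0x80, 0x00 };  /* example body */
            int sd = socket(AF_MCTP, SOCK_DGRAM, 0);

            if (sd < 0) {
                    perror("socket");
                    return 1;
            }

            /* Routing, binding selection and physical addressing all
             * happen in-kernel; userspace addresses the peer by EID. */
            if (sendto(sd, msg, sizeof(msg), 0,
                       (struct sockaddr *)&addr, sizeof(addr)) < 0)
                    perror("sendto");

            close(sd);
            return 0;
    }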

I've written a little more about it all in the past:

https://lists.ozlabs.org/pipermail/openbmc/2019-May/016460.html

Hope that helps!

Andrew


* Re: MCTP over PCI on AST2500
  2020-01-09  1:57         ` Andrew Jeffery
  2020-01-09 18:17           ` Vijay Khemka
  2020-01-10  0:30           ` Andrew Jeffery
@ 2020-01-10  3:40           ` Michael Richardson
  2020-01-10  5:05             ` Andrew Jeffery
  2 siblings, 1 reply; 27+ messages in thread
From: Michael Richardson @ 2020-01-10  3:40 UTC (permalink / raw)
  To: Andrew Jeffery; +Cc: Sharad Khetan, Vijay Khemka, rgrs, openbmc


Andrew Jeffery <andrew@aj.id.au> wrote:
    > On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
    >> Hi Andrew,
    >> Sorry for late response.
    >> The plan is to have MCTP in user space.
    >>

    > How are you handling this then? mmap()'ing the BAR from sysfs?

    > I plan to get back to implementing in-kernel socket-based MCTP shortly.
    > Unfortunately it slipped back a little in my priority list late last year. I'd be
    > interested in your feedback on the proposal when I get something written
    > down.

I have read through a few MCTP documents on dmtf.org, but they were either
too high-level (SMBIOS tables) or too low-level (MCTP over UART).

Is there something that I can read that explains the underlying PCI
relationships between the BMC and the host CPU's PCI/bridges?
Maybe I just need to read the AST2500 datasheet?

(I was at one point quite knowledgeable about PCI, having designed adapter
cards with multiple targets and dealt with swizzling, and BARs, etc.)

What I heard is that for typical AST2500-based BMCs, the host CPU can map the
entire address space of the AST2500, and this rather concerns me.
I had rather expected some kind of mailbox system in a specialized RAM that
both systems could use to exchange data.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [




* Re: MCTP over PCI on AST2500
  2020-01-10  3:40           ` Michael Richardson
@ 2020-01-10  5:05             ` Andrew Jeffery
  2020-01-10 15:38               ` Michael Richardson
  0 siblings, 1 reply; 27+ messages in thread
From: Andrew Jeffery @ 2020-01-10  5:05 UTC (permalink / raw)
  To: Michael Richardson; +Cc: Sharad Khetan, Vijay Khemka, rgrs, openbmc



On Fri, 10 Jan 2020, at 14:10, Michael Richardson wrote:
> 
> Andrew Jeffery <andrew@aj.id.au> wrote:
>     > On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
>     >> Hi Andrew,
>     >> Sorry for late response.
>     >> The plan is to have MCTP in user space.
>     >>
> 
>     > How are you handling this then? mmap()'ing the BAR from sysfs?
> 
>     > I plan to get back to implementing in-kernel socket-based MCTP 
> shortly.
>     > Unfortunately it slipped back a little in my priority list late 
> last year. I'd be
>     > interested in your feedback on the proposal when I get something 
> written
>     > down.
> 
> I have read through a few MCTP documents on dmtf.org, but they were either
> too high-level (SMBIOS tables) or too low-level (MCTP over UART).
> 
> Is there something that I can read that explains the underlying PCI
> relationships between the BMC and the host CPU's PCI/bridges?
> Maybe I just need to read the AST2500 datasheet?

Beware that I brainfarted in my reply above, so before I go further:

https://lists.ozlabs.org/pipermail/openbmc/2020-January/020141.html

But to answer your questions, you should read the MCTP Base Specification
(DSP0236)[1] and MCTP PCIe VDM Transport Binding Specification (DSP0238)[2]
and reference the MCTP Controller section of the ASPEED datasheets.

[1] https://www.dmtf.org/sites/default/files/standards/documents/DSP0236_1.3.0.pdf
[2] https://www.dmtf.org/sites/default/files/standards/documents/DSP0238_1.1.0.pdf

> 
> (I was at one point quite knowledgeable about PCI, having designed adapter
> cards with multiple targets and dealt with swizzling, and BARs, etc.)
> 
> What I heard is that for typical AST2500 based BMCs, the host CPU can map the
> entire address space of the AST2500, and this rather concerns me.

Yes, this is indeed concerning. It has its own CVE:

https://nvd.nist.gov/vuln/detail/CVE-2019-6260

OpenBMC provides mitigations through the `phosphor-isolation` distro feature.
The feature enables this u-boot patch that disables all of the backdoors early in
u-boot:

https://github.com/openbmc/meta-phosphor/blob/master/aspeed-layer/recipes-bsp/u-boot/files/0001-aspeed-Disable-unnecessary-features.patch

The distro feature is opt-in as it has impacts beyond simply disabling the backdoors
(there are some unfortunate side-effects to enforcing confidentiality of the BMC's
address space).

> I had rather expected some kind of mailbox system in a specialized ram that
> both systems could use to exchange data.

Well, a few of us at IBM have cooked up an LPC binding that is not yet standardised
but does exactly this. We use a KCS device to send byte-sized control commands
and interrupts between the host and the BMC, and use a reserved memory region
mapped to the LPC firmware space to transfer message data. I don't think we've
published the spec yet, but I can put the work in to get it onto the list.
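
Roughly, the shared state looks something like the sketch below; the field
names and layout here are illustrative only, the real control area is defined
by the spec we've yet to publish:

    #include <stdint.h>

    /* Illustrative layout only; the actual control area is defined by
     * the (as yet unpublished) binding spec. */
    struct lpc_mctp_ctrl {
            uint32_t magic;         /* identifies the control area */
            uint16_t bmc_ver_min;   /* version negotiation, BMC side */
            uint16_t bmc_ver_cur;
            uint16_t host_ver_min;  /* version negotiation, host side */
            uint16_t host_ver_cur;
            uint32_t rx_offset;     /* host -> BMC packet buffer */
            uint32_t rx_size;
            uint32_t tx_offset;     /* BMC -> host packet buffer */
            uint32_t tx_size;
    };

    /* Byte-sized commands ("init", "tx complete", ...) and attention
     * interrupts travel over the KCS data/status registers, while the
     * packet payloads are copied through the buffers described above. */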

Hope that helps,

Andrew


* Re: MCTP over PCI on AST2500
  2020-01-10  5:05             ` Andrew Jeffery
@ 2020-01-10 15:38               ` Michael Richardson
  2020-01-12 23:38                 ` Andrew Jeffery
  0 siblings, 1 reply; 27+ messages in thread
From: Michael Richardson @ 2020-01-10 15:38 UTC (permalink / raw)
  To: Andrew Jeffery; +Cc: Sharad Khetan, Vijay Khemka, rgrs, openbmc


Andrew Jeffery <andrew@aj.id.au> wrote:
    >> I have read through a few MCTP documents on dmtf.org, but they were either
    >> too high-level (SMBIOS tables) or too low-level (MCTP over UART).
    >>
    >> Is there something that I can read that explains the underlying PCI
    >> relationships between the BMC and the host CPU's PCI/bridges?
    >> Maybe I just need to read the AST2500 datasheet?

    > Beware that I brainfarted in my reply above, so before I go further:

    > https://lists.ozlabs.org/pipermail/openbmc/2020-January/020141.html

yes, I got that part :-)

    > But to answer your questions, you should read the MCTP Base Specification
    > (DSP0236)[1] and MCTP PCIe VDM Transport Binding Specification (DSP0238)[2]
    > and reference the MCTP Controller section of the ASPEED datasheets.

    > [1] https://www.dmtf.org/sites/default/files/standards/documents/DSP0236_1.3.0.pdf
    > [2] https://www.dmtf.org/sites/default/files/standards/documents/DSP0238_1.1.0.pdf

Thank you, this is what I was looking for.

    >> (I was at one point quite knowledgeable about PCI, having designed adapter
    >> cards with multiple targets and dealt with swizzling, and BARs, etc.)
    >>
    >> What I heard is that for typical AST2500 based BMCs, the host CPU can map the
    >> entire address space of the AST2500, and this rather concerns me.

    > Yes, this is indeed concerning. It has its own CVE:

    > https://nvd.nist.gov/vuln/detail/CVE-2019-6260

I was concerned that it really was this bad.

    > OpenBMC provides mitigations through the `phosphor-isolation` distro feature.
    > The feature enables this u-boot patch that disables all of the backdoors early in
    > u-boot:

    > https://github.com/openbmc/meta-phosphor/blob/master/aspeed-layer/recipes-bsp/u-boot/files/0001-aspeed-Disable-unnecessary-features.patch

    > The distro feature is opt-in as it has impacts beyond simply disabling the backdoors
    > (there are some unfortunate side-effects to enforcing confidentiality of the BMC's
    > address space).

okay, so the bridge gets turned off, and it has some other effects.
What are the side effects?  I'm guessing by the inclusion of the VGA defines
in that board init that they are video related.

I can see that doing this in u-boot is the earliest point possible; but in most
cases the main CPU has no power until the BMC boots, so it can't attack until
the BMC is running.  Are there some situations in which the BMC (or the P2A
bridge) could get reset without the host CPU also being reset?

    >> I had rather expected some kind of mailbox system in a specialized ram that
    >> both systems could use to exchange data.

    > Well, a few of us at IBM have cooked up an LPC binding that is not yet standardised
    > but does exactly this. We use a KCS device to send byte-sized control commands
    > and interrupts between the host and the BMC, and use a reserved memory region
    > mapped to the LPC firmware space to transfer message data. I don't think we've
    > published the spec yet, but I can put the work in to get it onto the list.

That's cool, I'm glad that you've gone this way.



* Re: MCTP over PCI on AST2500
  2020-01-10 15:38               ` Michael Richardson
@ 2020-01-12 23:38                 ` Andrew Jeffery
  2020-01-13 17:09                   ` Michael Richardson
  0 siblings, 1 reply; 27+ messages in thread
From: Andrew Jeffery @ 2020-01-12 23:38 UTC (permalink / raw)
  To: Michael Richardson; +Cc: Sharad Khetan, Vijay Khemka, rgrs, openbmc



On Sat, 11 Jan 2020, at 02:08, Michael Richardson wrote:
> 
> Andrew Jeffery <andrew@aj.id.au> wrote:
>     > https://github.com/openbmc/meta-phosphor/blob/master/aspeed-layer/recipes-bsp/u-boot/files/0001-aspeed-Disable-unnecessary-features.patch
> 
>     > The distro feature is opt-in as it has impacts beyond simply
>     > disabling the backdoors (there are some unfortunate side-effects
>     > to enforcing confidentiality of the BMC's address space).
> 
> okay, so the bridge gets turned off, and it has some other effects.
> What are the side effects?  I'm guessing by the inclusion of the VGA defines
> in that board init that they are video related.

We have a slightly more detailed description here:

https://github.com/openbmc/openbmc/issues/3475

With respect to PCIe, disabling the P2A causes the host kernel to fail probing
the AST DRM driver on kernels before 4.12 (from memory). This impacts
POWER more than other host architectures due to invalid accesses triggering
EEH.

With respect to LPC, the issue is largely that the bit in the LPC controller to
disable the iLPC2AHB bridge only disables write access; the host can still
continue to issue arbitrary reads of the BMC address space. To prevent
arbitrary reads the BMC must disable the entire SuperIO controller, which
knocks out the ability to configure UARTs, GPIOs, and the LPC mailbox
among other functionality. On some platforms disabling SuperIO is feasible
(POWER based), but others may require some of this functionality be
present.

> 
> I can see that doing this in u-boot is the earliest point possible;

It's actually possible to disable the backdoors before the first instruction is
run on the ARM core with the firmware strapping feature, but it's likely the
result becomes platform-specific and integrating the configuration
into the flash image can be a bit fiddly (you could implement it with a
custom u-boot linker script).

> but in most
> cases the main CPU has no power until the BMC boots, so it can't attack until
> the BMC is running.  Are there some situations in which the BMC (or the P2A
> bridge) could get reset without the host CPU also being reset?

See the discussion of the watchdog reset modes in the link above.

Cheers,

Andrew


* RE: MCTP over PCI on AST2500
  2020-01-10  0:30           ` Andrew Jeffery
@ 2020-01-13 16:53             ` Khetan, Sharad
  2020-01-13 18:54               ` Deepak Kodihalli
  2020-01-13 23:22               ` Andrew Jeffery
  0 siblings, 2 replies; 27+ messages in thread
From: Khetan, Sharad @ 2020-01-13 16:53 UTC (permalink / raw)
  To: Andrew Jeffery, Vijay Khemka, rgrs, openbmc
  Cc: Jeremy Kerr, Deepak Kodihalli, Winiarska, Iwona, Bhat, Sumanth

Hi Andrew,

On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
> 
> 
> On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
> > Hi Andrew,
> > Sorry for late response.
> > The plan is to have MCTP in user space. 
> > 
> 
> How are you handling this then? mmap()'ing the BAR from sysfs?

>Sorry, let me put my brain back in, I was thinking of the wrong side of the  BMC/Host MCTP channel. How much were you planning to do in userspace on the BMC? As in, are you planning to drive the BMC's PCIe MCTP controller from userspace (presumably via /dev/mem)?

 
For implementation on the BMC, we agree that it's better to do it in the kernel (and, as you mentioned, use a consistent interface to upper layers and provide another transport). However, given the time needed to implement things in the kernel (and the review after), we are starting with a short-term solution. We will be implementing MCTP (protocol elements) in user space, along with a low-level MCTP PCIe driver just to push bits on PCIe. Iwona is working on this and should be able to describe the exact primitives.

Thanks,
-Sharad


* Re: MCTP over PCI on AST2500
  2020-01-12 23:38                 ` Andrew Jeffery
@ 2020-01-13 17:09                   ` Michael Richardson
  0 siblings, 0 replies; 27+ messages in thread
From: Michael Richardson @ 2020-01-13 17:09 UTC (permalink / raw)
  To: Andrew Jeffery; +Cc: Sharad Khetan, Vijay Khemka, rgrs, openbmc


Andrew Jeffery <andrew@aj.id.au> wrote:
    > On Sat, 11 Jan 2020, at 02:08, Michael Richardson wrote:
    >> 
    >> Andrew Jeffery <andrew@aj.id.au> wrote:
    >> > 
    >> https://github.com/openbmc/meta-phosphor/blob/master/aspeed-layer/recipes-bsp/u-boot/files/0001-aspeed-Disable-unnecessary-features.patch
    >> 
    >> > The distro feature is opt-in as it has impacts beyond simply
    >> disabling the backdoors > (there are some unfortunate side-effects to
    >> enforcing confidentiality of the BMC's > address space.
    >> 
    >> okay, so the bridge gets turned off, and it has some other effects.
    >> What are the side effects?  I'm guessing by the inclusion of the VGA
    >> defines in that board init that they are video related.

    > We have a slightly more detailed description here:

    > https://github.com/openbmc/openbmc/issues/3475

    > With respect to PCIe, disabling the P2A causes the host kernel to fail
    > probing the AST DRM driver on kernels before 4.12 (from memory). This
    > impacts POWER more than other host architectures due to invalid
    > accesses triggering EEH.

Thanks, that description was very useful... very good job here.

    > With respect to LPC, the issue is largely that the bit in the LPC
    > controller to disable the iLPC2AHB bridge only disables write access;
    > the host can still continue to issue arbitrary reads of the BMC
    > address space.

That's an interesting challenge.
If it can read, then it can read crypto-secrets (private keys and session keys).
Does the AST have any internal places which aren't visible externally?  
I can see how this feature was really useful for debugging BMC code :-)
I wish I could answer this question myself, but I haven't found a public spec
for the AST yet.  Is the NDA process difficult, I wonder.

    > To prevent arbitrary reads the BMC must disable the
    > entire SuperIO controller, which knocks out the ability to configure
    > UARTs, GPIOs, and the LPC mailbox among other functionality. On some
    > platforms disabling SuperIO is feasible (POWER based), but others may
    > require some of this functionality be present.

Yes, I can see how that would cripple functionality.

-- 
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [



* Re: MCTP over PCI on AST2500
  2020-01-13 16:53             ` Khetan, Sharad
@ 2020-01-13 18:54               ` Deepak Kodihalli
  2020-01-14  5:54                 ` Khetan, Sharad
  2020-01-13 23:22               ` Andrew Jeffery
  1 sibling, 1 reply; 27+ messages in thread
From: Deepak Kodihalli @ 2020-01-13 18:54 UTC (permalink / raw)
  To: Khetan, Sharad, Andrew Jeffery, Vijay Khemka, rgrs, openbmc
  Cc: Jeremy Kerr, Winiarska, Iwona, Bhat, Sumanth

On 13/01/20 10:23 PM, Khetan, Sharad wrote:
> Hi Andrew,
> 
> On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
>>
>>
>> On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
>>> Hi Andrew,
>>> Sorry for late response.
>>> The plan is to have MCTP in user space.
>>>
>>
>> How are you handling this then? mmap()'ing the BAR from sysfs?
> 
>> Sorry, let me put my brain back in, I was thinking of the wrong side of the  BMC/Host MCTP channel. How much were you planning to do in userspace on the BMC? As in, are you planning to drive the BMC's PCIe MCTP controller from userspace (presumably via /dev/mem)?
> 
>   
> For implementation on the BMC, we agree that it's better to do it in kernel (and as you mentioned  - use consistent interface to upper layers, provide another transport). However, given the time needed to implement things in kernel (and the review after), we are starting with a short term solution. We will be implementing MCTP (protocol elements) in user space, along with a low level MCTP PCIe driver just to push bits on PCIe. Iwona is working on this and should be able to describe the exact primitive.

Do you plan to do the user-space work as an extension to/reusing 
components from openbmc/libmctp?

Thanks,
Deepak


* Re: MCTP over PCI on AST2500
  2020-01-13 16:53             ` Khetan, Sharad
  2020-01-13 18:54               ` Deepak Kodihalli
@ 2020-01-13 23:22               ` Andrew Jeffery
  1 sibling, 0 replies; 27+ messages in thread
From: Andrew Jeffery @ 2020-01-13 23:22 UTC (permalink / raw)
  To: Sharad Khetan, Vijay Khemka, rgrs, openbmc
  Cc: Jeremy Kerr, Deepak Kodihalli, Winiarska, Iwona, Bhat, Sumanth



On Tue, 14 Jan 2020, at 03:23, Khetan, Sharad wrote:
> Hi Andrew,
> 
> On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
> > 
> > 
> > On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
> > > Hi Andrew,
> > > Sorry for late response.
> > > The plan is to have MCTP in user space. 
> > > 
> > 
> > How are you handling this then? mmap()'ing the BAR from sysfs?
> 
> >Sorry, let me put my brain back in, I was thinking of the wrong side of the  BMC/Host MCTP channel. How much were you planning to do in userspace on the BMC? As in, are you planning to drive the BMC's PCIe MCTP controller from userspace (presumably via /dev/mem)?
> 
>  
> For implementation on the BMC, we agree that it's better to do it in 
> kernel (and as you mentioned  - use consistent interface to upper 
> layers, provide another transport). However, given the time needed to 
> implement things in kernel (and the review after), we are starting with 
> a short term solution. We will be implementing MCTP (protocol elements) 
> in user space, along with a low level MCTP PCIe driver just to push 
> bits on PCIe. Iwona is working on this and should be able to describe 
> the exact primitive.

Alright, great, I'll keep you posted on the kernel-side progress.

Andrew


* RE: MCTP over PCI on AST2500
  2020-01-13 18:54               ` Deepak Kodihalli
@ 2020-01-14  5:54                 ` Khetan, Sharad
  2020-01-14  6:20                   ` Jeremy Kerr
  2020-01-14  8:54                   ` rgrs
  0 siblings, 2 replies; 27+ messages in thread
From: Khetan, Sharad @ 2020-01-14  5:54 UTC (permalink / raw)
  To: Deepak Kodihalli, Andrew Jeffery, Vijay Khemka, rgrs, openbmc
  Cc: Jeremy Kerr, Winiarska, Iwona, Bhat, Sumanth

Hi Deepak,

On 13/01/20 10:23 PM, Khetan, Sharad wrote:
> Hi Andrew,
> 
> On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
>>
>>
>> On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
>>> Hi Andrew,
>>> Sorry for late response.
>>> The plan is to have MCTP in user space.
>>>
>>
>> How are you handling this then? mmap()'ing the BAR from sysfs?
> 
>> Sorry, let me put my brain back in, I was thinking of the wrong side of the  BMC/Host MCTP channel. How much were you planning to do in userspace on the BMC? As in, are you planning to drive the BMC's PCIe MCTP controller from userspace (presumably via /dev/mem)?
> 
>   
> For implementation on the BMC, we agree that it's better to do it in kernel (and as you mentioned  - use consistent interface to upper layers, provide another transport). However, given the time needed to implement things in kernel (and the review after), we are starting with a short term solution. We will be implementing MCTP (protocol elements) in user space, along with a low level MCTP PCIe driver just to push bits on PCIe. Iwona is working on this and should be able to describe the exact primitive.

Do you plan to do the user-space work as an extension to/reusing components from openbmc/libmctp?

Thanks,
Deepak

Yes, we plan to reuse and extend libmctp, supporting PCIe as well as SMBus bindings. We plan to use D-Bus extensions to the existing libmctp. That said, we will know the exact extent of reuse/modifications when we really start implementing.

We are implementing this for the AST2600 (we will not support any workarounds for the AST2500 bug).

@Andrew, Thanks for your response.

Thanks,
Sharad
 


* RE: MCTP over PCI on AST2500
  2020-01-14  5:54                 ` Khetan, Sharad
@ 2020-01-14  6:20                   ` Jeremy Kerr
  2020-01-14  6:39                     ` Khetan, Sharad
  2020-01-14 17:45                     ` Patrick Williams
  2020-01-14  8:54                   ` rgrs
  1 sibling, 2 replies; 27+ messages in thread
From: Jeremy Kerr @ 2020-01-14  6:20 UTC (permalink / raw)
  To: Khetan, Sharad, Deepak Kodihalli, Andrew Jeffery, Vijay Khemka,
	rgrs, openbmc
  Cc: Winiarska, Iwona, Bhat, Sumanth

Hi Khetan,

Just a suggestion - you probably don't want to be passing MCTP messages over dbus - this is something we learnt from the IPMI implementation.

The current design of the mctp-demux-daemon (included in the libmctp codebase) is intended to provide an interface that will be easy to migrate to a future kernel implementation (ie., using sockets to pass MCTP messages), and allows multiple applications to be listening for MCTP messages of different types.
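
A client of the demux daemon ends up looking roughly like the sketch below.
The socket name, socket type and registration byte here are from memory of
the libmctp code, so verify them against the codebase:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
            /* Abstract-namespace socket name assumed from the demux
             * daemon; the leading NUL in sun_path selects the abstract
             * namespace. */
            const char *name = "mctp-mux";
            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            uint8_t type = 0x01;  /* register for PLDM-type messages */
            uint8_t buf[1024];
            ssize_t len;

            int sd = socket(AF_UNIX, SOCK_SEQPACKET, 0);
            if (sd < 0) {
                    perror("socket");
                    return 1;
            }

            memcpy(addr.sun_path + 1, name, strlen(name));
            if (connect(sd, (struct sockaddr *)&addr,
                        sizeof(addr.sun_family) + 1 + strlen(name)) < 0) {
                    perror("connect");
                    return 1;
            }

            /* The first byte sent registers the MCTP message type we
             * want; subsequent packets are [EID | payload]. */
            if (write(sd, &type, 1) != 1) {
                    perror("write");
                    return 1;
            }

            len = read(sd, buf, sizeof(buf));
            if (len > 1)
                    printf("message from EID %u, %zd bytes\n",
                           buf[0], len - 1);

            close(sd);
            return 0;
    }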

Regards,


Jeremy

On 14 January 2020 1:54:49 pm AWST, "Khetan, Sharad" <sharad.khetan@intel.com> wrote:
>Hi Deepak,
>
>On 13/01/20 10:23 PM, Khetan, Sharad wrote:
>> Hi Andrew,
>> 
>> On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
>>>
>>>
>>> On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
>>>> Hi Andrew,
>>>> Sorry for late response.
>>>> The plan is to have MCTP in user space.
>>>>
>>>
>>> How are you handling this then? mmap()'ing the BAR from sysfs?
>> 
>>> Sorry, let me put my brain back in, I was thinking of the wrong side
>of the  BMC/Host MCTP channel. How much were you planning to do in
>userspace on the BMC? As in, are you planning to drive the BMC's PCIe
>MCTP controller from userspace (presumably via /dev/mem)?
>> 
>>   
>> For implementation on the BMC, we agree that it's better to do it in
>kernel (and as you mentioned  - use consistent interface to upper
>layers, provide another transport). However, given the time needed to
>implement things in kernel (and the review after), we are starting with
>a short term solution. We will be implementing MCTP (protocol elements)
>in user space, along with a low level MCTP PCIe driver just to push
>bits on PCIe. Iwona is working on this and should be able to describe
>the exact primitive.
>
>Do you plan to do the user-space work as an extension to/reusing
>components from openbmc/libmctp?
>
>Thanks,
>Deepak
>
>Yes we plan to reuse and extend the libmctp, support PCIe as well as
>SMBus bindings. We plan to use d-bus extensions to existing libmctp.
>That said, we will know the exact extent of reuse/modifications when we
>really start implementing.
>
>We are implementing this for AST 2600 (will not support any workarounds
>for AST 2500 bug). 
>
>@Andrew, Thanks for your response.
>
>Thanks,
>Sharad
> 


* Re: MCTP over PCI on AST2500
  2020-01-14  6:20                   ` Jeremy Kerr
@ 2020-01-14  6:39                     ` Khetan, Sharad
  2020-01-14  8:10                       ` Deepak Kodihalli
  2020-01-14 15:54                       ` Thomaiyar, Richard Marian
  2020-01-14 17:45                     ` Patrick Williams
  1 sibling, 2 replies; 27+ messages in thread
From: Khetan, Sharad @ 2020-01-14  6:39 UTC (permalink / raw)
  To: Jeremy Kerr
  Cc: Deepak Kodihalli, Andrew Jeffery, Vijay Khemka, rgrs, openbmc,
	Winiarska, Iwona, Bhat, Sumanth

Thanks for the pointer, Jeremy. We will look into the demux daemon.
Thanks,
-Sharad

On Jan 13, 2020, at 10:21 PM, Jeremy Kerr <jk@ozlabs.org> wrote:

Hi Khetan,

Just a suggestion - you probably don't want to be passing MCTP messages over dbus - this is something we learnt from the IPMI implementation.

The current design of the mctp-demux-daemon (included in the libmctp codebase) is intended to provide an interface that will be easy to migrate to a future kernel implementation (ie., using sockets to pass MCTP messages), and allows multiple applications to be listening for MCTP messages of different types.

Regards,


Jeremy

On 14 January 2020 1:54:49 pm AWST, "Khetan, Sharad" <sharad.khetan@intel.com> wrote:

Hi Deepak,

On 13/01/20 10:23 PM, Khetan, Sharad wrote:
Hi Andrew,

On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:


 On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
 Hi Andrew,
 Sorry for late response.
 The plan is to have MCTP in user space.


 How are you handling this then? mmap()'ing the BAR from sysfs?

Sorry, let me put my brain back in, I was thinking of the wrong side of the  BMC/Host MCTP channel. How much were you planning to do in userspace on the BMC? As in, are you planning to drive the BMC's PCIe MCTP controller from userspace (presumably via /dev/mem)?


For implementation on the BMC, we agree that it's better to do it in kernel (and as you mentioned  - use consistent interface to upper layers, provide another transport). However, given the time needed to implement things in kernel (and the review after), we are starting with a short term solution. We will be implementing MCTP (protocol elements) in user space, along with a low level MCTP PCIe driver just to push bits on PCIe. Iwona is working on this and should be able to describe the exact primitive.

Do you plan to do the user-space work as an extension to/reusing components from openbmc/libmctp?

Thanks,
Deepak

Yes we plan to reuse and extend the libmctp, support PCIe as well as SMBus bindings. We plan to use d-bus extensions to existing libmctp. That said, we will know the exact extent of reuse/modifications when we really start implementing.

We are implementing this for AST 2600 (will not support any workarounds for AST 2500 bug).

@Andrew, Thanks for your response.

Thanks,
Sharad



* Re: MCTP over PCI on AST2500
  2020-01-14  6:39                     ` Khetan, Sharad
@ 2020-01-14  8:10                       ` Deepak Kodihalli
  2020-01-14 15:54                       ` Thomaiyar, Richard Marian
  1 sibling, 0 replies; 27+ messages in thread
From: Deepak Kodihalli @ 2020-01-14  8:10 UTC (permalink / raw)
  To: Khetan, Sharad, Jeremy Kerr
  Cc: Andrew Jeffery, Vijay Khemka, rgrs, openbmc, Winiarska, Iwona,
	Bhat, Sumanth

On 14/01/20 12:09 PM, Khetan, Sharad wrote:
> Thanks for the pointer Jeremy. We will look into demux daemon.
> Thanks,
> -Sharad
> 
> On Jan 13, 2020, at 10:21 PM, Jeremy Kerr <jk@ozlabs.org> wrote:
> 
>> Hi Khetan,
>>
>> Just a suggestion - you probably don't want to be passing MCTP 
>> messages over dbus - this is something we learnt from the IPMI 
>> implementation.

We could still have D-Bus endpoints that enable the BMC to identify MCTP 
devices connected to the BMC, just that (agreeing with Jeremy here) the 
actual Tx/Rx could be backed by the socket/kernel API.


* RE: MCTP over PCI on AST2500
  2020-01-14  5:54                 ` Khetan, Sharad
  2020-01-14  6:20                   ` Jeremy Kerr
@ 2020-01-14  8:54                   ` rgrs
  1 sibling, 0 replies; 27+ messages in thread
From: rgrs @ 2020-01-14  8:54 UTC (permalink / raw)
  To: Khetan, Sharad
  Cc: Deepak Kodihalli, Andrew Jeffery, Vijay Khemka, openbmc,
	Jeremy Kerr, Winiarska, Iwona, Bhat, Sumanth

Hi Sharad,

Please can you clarify what you meant by,

> "We are implementing this for AST 2600 (will not support any workarounds for AST 2500 bug)."

I'm assuming the implementation will work with the AST2500 (with its vulnerabilities) and the AST2600.

Is my assumption correct, or did you mean no support for the AST2500?

Thanks,
Raj

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, January 14, 2020 11:24 AM, Khetan, Sharad <sharad.khetan@intel.com> wrote:

> Hi Deepak,
>
> On 13/01/20 10:23 PM, Khetan, Sharad wrote:
>
> > Hi Andrew,
> > On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
> >
> > > On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
> > >
> > > > Hi Andrew,
> > > > Sorry for late response.
> > > > The plan is to have MCTP in user space.
> > >
> > > How are you handling this then? mmap()'ing the BAR from sysfs?
> >
> > > Sorry, let me put my brain back in, I was thinking of the wrong side of the BMC/Host MCTP channel. How much were you planning to do in userspace on the BMC? As in, are you planning to drive the BMC's PCIe MCTP controller from userspace (presumably via /dev/mem)?
> >
> > For implementation on the BMC, we agree that it's better to do it in kernel (and as you mentioned - use consistent interface to upper layers, provide another transport). However, given the time needed to implement things in kernel (and the review after), we are starting with a short term solution. We will be implementing MCTP (protocol elements) in user space, along with a low level MCTP PCIe driver just to push bits on PCIe. Iwona is working on this and should be able to describe the exact primitive.
>
> Do you plan to do the user-space work as an extension to/reusing components from openbmc/libmctp?
>
> Thanks,
> Deepak
>
> Yes we plan to reuse and extend the libmctp, support PCIe as well as SMBus bindings. We plan to use d-bus extensions to existing libmctp. That said, we will know the exact extent of reuse/modifications when we really start implementing.
>
> We are implementing this for AST 2600 (will not support any workarounds for AST 2500 bug).
>
> @Andrew, Thanks for your response.
>
> Thanks,
> Sharad


* Re: MCTP over PCI on AST2500
  2020-01-14  6:39                     ` Khetan, Sharad
  2020-01-14  8:10                       ` Deepak Kodihalli
@ 2020-01-14 15:54                       ` Thomaiyar, Richard Marian
  1 sibling, 0 replies; 27+ messages in thread
From: Thomaiyar, Richard Marian @ 2020-01-14 15:54 UTC (permalink / raw)
  To: Khetan, Sharad, Jeremy Kerr
  Cc: Winiarska, Iwona, Andrew Jeffery, openbmc, rgrs, Bhat, Sumanth,
	Vijay Khemka

Yes, Jeremy. We are aware of the limitation, but as Sharad stated, we 
will be starting with a D-Bus based approach due to priority, and then 
move to a socket based approach (though not immediately).

Having said that, I have pushed WIP documents for both MCTP & PLDM (still 
high-level; capturing of the low-level implementation details is in 
progress). We can discuss further in the reviews (for better tracking).

1. https://gerrit.openbmc-project.xyz/#/c/openbmc/docs/+/28424/

2. https://gerrit.openbmc-project.xyz/#/c/openbmc/docs/+/28425/

Note: the proposal is to have an abstraction layer between D-Bus & sockets, 
so that upper layers like PLDM can switch to a socket/driver based approach 
at a later stage; a sketch of the idea follows below.
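
A minimal sketch of what such an abstraction layer could look like (all 
names here are hypothetical, just to show the shape):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical names, only to show the shape of the abstraction. */
    typedef void (*mctp_rx_fn)(void *data, uint8_t eid,
                               const void *msg, size_t len);

    struct mctp_transport_ops {
            /* Send an MCTP message to the endpoint with the given EID. */
            int (*tx)(void *ctx, uint8_t eid, const void *msg, size_t len);
            /* Register a receive callback for one MCTP message type. */
            int (*register_rx)(void *ctx, uint8_t type,
                               mctp_rx_fn fn, void *data);
    };

    /* An upper layer such as PLDM holds a struct mctp_transport_ops and
     * never sees whether it is backed by D-Bus today or by a
     * socket/driver later. */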

Related to MCTP over PCIe, Iwona will send out a review which will be 
along the lines of the MCTP base design document.

Regards,

Richard


On 1/14/2020 12:09 PM, Khetan, Sharad wrote:
> Thanks for the pointer Jeremy. We will look into demux daemon.
> Thanks,
> -Sharad
>
> On Jan 13, 2020, at 10:21 PM, Jeremy Kerr <jk@ozlabs.org> wrote:
>
>> Hi Khetan,
>>
>> Just a suggestion - you probably don't want to be passing MCTP 
>> messages over dbus - this is something we learnt from the IPMI 
>> implementation.
>>
>> The current design of the mctp-demux-daemon (included in the libmctp 
>> codebase) is intended to provide an interface that will be easy to 
>> migrate to a future kernel implementation (ie., using sockets to pass 
>> MCTP messages), and allows multiple applications to be listening for 
>> MCTP messages of different types.
>>
>> Regards,
>>
>>
>> Jeremy
>>
>> On 14 January 2020 1:54:49 pm AWST, "Khetan, Sharad" 
>> <sharad.khetan@intel.com> wrote:
>>
>>     Hi Deepak,
>>
>>     On 13/01/20 10:23 PM, Khetan, Sharad wrote:
>>
>>         Hi Andrew, On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:
>>
>>             On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
>>
>>                 Hi Andrew, Sorry for late response. The plan is to
>>                 have MCTP in user space. 
>>
>>             How are you handling this then? mmap()'ing the BAR from
>>             sysfs? 
>>
>>             Sorry, let me put my brain back in, I was thinking of the
>>             wrong side of the BMC/Host MCTP channel. How much were
>>             you planning to do in userspace on the BMC? As in, are
>>             you planning to drive the BMC's PCIe MCTP controller from
>>             userspace (presumably via /dev/mem)? 
>>
>>         For implementation on the BMC, we agree that it's better to
>>         do it in kernel (and as you mentioned - use consistent
>>         interface to upper layers, provide another transport).
>>         However, given the time needed to implement things in kernel
>>         (and the review after), we are starting with a short term
>>         solution. We will be implementing MCTP (protocol elements) in
>>         user space, along with a low level MCTP PCIe driver just to
>>         push bits on PCIe. Iwona is working on this and should be
>>         able to describe the exact primitive. 
>>
>>
>>     Do you plan to do the user-space work as an extension to/reusing components from openbmc/libmctp?
>>
>>     Thanks,
>>     Deepak
>>
>>     Yes we plan to reuse and extend the libmctp, support PCIe as well as SMBus bindings. We plan to use d-bus extensions to existing libmctp. That said, we will know the exact extent of reuse/modifications when we really start implementing.
>>
>>     We are implementing this for AST 2600 (will not support any workarounds for AST 2500 bug).
>>
>>     @Andrew, Thanks for your response.
>>
>>     Thanks,
>>     Sharad
>>       
>>


* Re: MCTP over PCI on AST2500
  2020-01-14  6:20                   ` Jeremy Kerr
  2020-01-14  6:39                     ` Khetan, Sharad
@ 2020-01-14 17:45                     ` Patrick Williams
  2020-01-15 13:51                       ` Jeremy Kerr
  1 sibling, 1 reply; 27+ messages in thread
From: Patrick Williams @ 2020-01-14 17:45 UTC (permalink / raw)
  To: Jeremy Kerr
  Cc: Khetan, Sharad, Deepak Kodihalli, Andrew Jeffery, Vijay Khemka,
	rgrs, openbmc, Bhat, Sumanth, Winiarska, Iwona

Hello Jeremy,

On Tue, Jan 14, 2020 at 02:20:52PM +0800, Jeremy Kerr wrote:
> Hi Ketan,
> 
> Just a suggestion - you probably don't want to be passing MCTP messages over dbus - this is something we learnt from the IPMI implementation.

Is there a pointer to this "lesson learned" or the issues surrounding
it?  It seems like the btbridge is still using dbus, so I assume
host-ipmid is as well.

https://github.com/openbmc/btbridge/blob/master/btbridged.c#L47

I'm curious to understand what the issues were/are.

-- 
Patrick Williams


* Re: MCTP over PCI on AST2500
  2020-01-14 17:45                     ` Patrick Williams
@ 2020-01-15 13:51                       ` Jeremy Kerr
  2020-01-15 14:16                         ` Patrick Williams
  0 siblings, 1 reply; 27+ messages in thread
From: Jeremy Kerr @ 2020-01-15 13:51 UTC (permalink / raw)
  To: Patrick Williams
  Cc: Khetan, Sharad, Deepak Kodihalli, Andrew Jeffery, Vijay Khemka,
	rgrs, openbmc, Bhat, Sumanth, Winiarska, Iwona

Hi Patrick,

> > Just a suggestion - you probably don't want to be passing MCTP
> > messages over dbus - this is something we learnt from the IPMI
> > implementation.
> 
> Is there a pointer to this "lesson learned" or the issues surrounding
> it?  It seems like the btbridge is still using dbus, so I assume
> host-ipmid is as well.
> 
> https://github.com/openbmc/btbridge/blob/master/btbridged.c#L47
> 
> I'm curious to understand what the issues were/are.

No, nothing that anyone had specifically documented. I recall there
were concerns with shuffling larger amounts of data over dbus,
particularly for things like firmware update over IPMI. Because we're
using a dbus signal for incoming messages, we could potentially be
writing a lot of data to multiple processes - and more than necessary
if those processes haven't set up their dbus matches correctly.

I don't think there's enough concern to change existing code over for,
more just a consideration for future designs, of which this is one.
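
For what it's worth, the exposure depends on how narrowly each listener
scopes its match. A hypothetical sd-bus example (the interface and member
names are invented for illustration):

    #include <stdint.h>
    #include <systemd/sd-bus.h>

    /* Invoked only for signals matching the rule added below. */
    static int handle_mctp_message(sd_bus_message *m, void *userdata,
                                   sd_bus_error *ret_error)
    {
            /* ... unpack and dispatch the message ... */
            return 0;
    }

    int main(void)
    {
            sd_bus *bus = NULL;
            if (sd_bus_open_system(&bus) < 0)
                    return 1;

            /* A narrowly scoped match: only this interface and member,
             * rather than everything the daemon emits. The names are
             * invented for illustration. */
            if (sd_bus_add_match(bus, NULL,
                            "type='signal',"
                            "interface='xyz.openbmc_project.MCTP',"
                            "member='MessageReceived'",
                            handle_mctp_message, NULL) < 0) {
                    sd_bus_unref(bus);
                    return 1;
            }

            for (;;) {
                    int r = sd_bus_process(bus, NULL);
                    if (r < 0)
                            break;
                    if (r == 0)
                            sd_bus_wait(bus, (uint64_t)-1);
            }

            sd_bus_unref(bus);
            return 0;
    }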

Cheers,


Jeremy


* Re: MCTP over PCI on AST2500
  2020-01-15 13:51                       ` Jeremy Kerr
@ 2020-01-15 14:16                         ` Patrick Williams
  0 siblings, 0 replies; 27+ messages in thread
From: Patrick Williams @ 2020-01-15 14:16 UTC (permalink / raw)
  To: Jeremy Kerr
  Cc: Khetan, Sharad, Deepak Kodihalli, Andrew Jeffery, Vijay Khemka,
	rgrs, openbmc, Bhat, Sumanth, Winiarska, Iwona

On Wed, Jan 15, 2020 at 09:51:38PM +0800, Jeremy Kerr wrote:
> Hi Patrick,
> 
> > > Just a suggestion - you probably don't want to be passing MCTP
> > > messages over dbus - this is something we learnt from the IPMI
> > > implementation.
> > 
> > Is there a pointer to this "lesson learned" or the issues surrounding
> > it?  It seems like the btbridge is still using dbus, so I assume
> > host-ipmid is as well.
> > 
> > https://github.com/openbmc/btbridge/blob/master/btbridged.c#L47
> > 
> > I'm curious to understand what the issues were/are.
> 
> No, nothing that anyone had specifically documented. I recall there
> were concerns with shuffling larger amounts of data over dbus,
> particularly for things like firmware update over IPMI. Because we're
> using a dbus signal for incoming messages, we could potentially be
> writing a lot of data to multiple processes - and more than necessary
> if those processes haven't set up their dbus matches correctly.
> 
> I don't think there's enough concern to change existing code over for,
> more just a consideration for future designs, of which this is one.
> 

Maybe by that time bus1 will be a mature replacement for dbus. ;)

Thanks for the reply.

-- 
Patrick Williams


Thread overview: 27+ messages
2019-11-20  5:26 MCTP over PCI on AST2500 rgrs
2019-11-20  6:54 ` Vijay Khemka
2019-11-20  6:59   ` Khetan, Sharad
2019-11-22  0:38     ` Andrew Jeffery
2019-12-21  0:15       ` Khetan, Sharad
2020-01-09  1:57         ` Andrew Jeffery
2020-01-09 18:17           ` Vijay Khemka
2020-01-09 20:45             ` Richard Hanley
2020-01-10  1:29               ` Andrew Jeffery
2020-01-10  0:30           ` Andrew Jeffery
2020-01-13 16:53             ` Khetan, Sharad
2020-01-13 18:54               ` Deepak Kodihalli
2020-01-14  5:54                 ` Khetan, Sharad
2020-01-14  6:20                   ` Jeremy Kerr
2020-01-14  6:39                     ` Khetan, Sharad
2020-01-14  8:10                       ` Deepak Kodihalli
2020-01-14 15:54                       ` Thomaiyar, Richard Marian
2020-01-14 17:45                     ` Patrick Williams
2020-01-15 13:51                       ` Jeremy Kerr
2020-01-15 14:16                         ` Patrick Williams
2020-01-14  8:54                   ` rgrs
2020-01-13 23:22               ` Andrew Jeffery
2020-01-10  3:40           ` Michael Richardson
2020-01-10  5:05             ` Andrew Jeffery
2020-01-10 15:38               ` Michael Richardson
2020-01-12 23:38                 ` Andrew Jeffery
2020-01-13 17:09                   ` Michael Richardson
