* Re: PECI API?
@ 2017-10-23 17:03 Jae Hyun Yoo
  2017-10-30 17:45 ` Patrick Venture
  2017-10-30 19:21 ` Brad Bishop
  0 siblings, 2 replies; 19+ messages in thread
From: Jae Hyun Yoo @ 2017-10-23 17:03 UTC (permalink / raw)
  To: openbmc; +Cc: james.feist, ed.tanous


Hi Dave,

I'm currently working on a PECI kernel driver and hwmon driver implementation.
The kernel driver would provide these PECI commands as ioctls:

- low-level PECI xfer command
- Ping()
- GetDIB()
- GetTemp()
- RdPkgConfig()
- WrPkgConfig()
- RdIAMSR()
- RdPCIConfigLocal()
- WrPCIConfigLocal()
- RdPCIConfig()

Also, through the hwmon driver, these temperature monitoring features would
be provided:

- Core temperature
- DTS thermal margin (hysteresis)
- DDR DIMM temperature
- etc.

Patches will be sent upstream when they are ready.
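
For reference, a rough sketch of how the ioctl interface listed above might
look is below; the structure layout, magic number and names are illustrative
assumptions only, not the actual driver API:

  /* Hypothetical uapi sketch -- layout and ioctl numbers are examples only. */
  #include <linux/ioctl.h>
  #include <linux/types.h>

  struct peci_xfer_msg {
          __u8 addr;          /* PECI client address, e.g. 0x30 for socket 0 */
          __u8 tx_len;        /* number of bytes to write */
          __u8 rx_len;        /* number of bytes to read back */
          __u8 tx_buf[32];    /* command code followed by payload */
          __u8 rx_buf[32];    /* completion code followed by data */
  };

  #define PECI_IOC_BASE       0xb7    /* example magic number */
  #define PECI_IOC_XFER       _IOWR(PECI_IOC_BASE, 0, struct peci_xfer_msg)
  #define PECI_IOC_PING       _IOWR(PECI_IOC_BASE, 1, struct peci_xfer_msg)
  #define PECI_IOC_GET_TEMP   _IOWR(PECI_IOC_BASE, 2, struct peci_xfer_msg)
  /* GetDIB(), RdPkgConfig(), WrPkgConfig(), RdIAMSR() and the PCI config
   * commands would follow the same pattern. */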

 

Cheers,

Jae




* Re: PECI API?
  2017-10-23 17:03 PECI API? Jae Hyun Yoo
@ 2017-10-30 17:45 ` Patrick Venture
  2017-10-30 19:21 ` Brad Bishop
  1 sibling, 0 replies; 19+ messages in thread
From: Patrick Venture @ 2017-10-30 17:45 UTC (permalink / raw)
  To: Jae Hyun Yoo; +Cc: OpenBMC Maillist, ed.tanous, james.feist

Thanks!!! I'm very excited.

On Mon, Oct 23, 2017 at 10:03 AM, Jae Hyun Yoo
<jae.hyun.yoo@linux.intel.com> wrote:
> Hi Dave,
>
>
>
> I’m currently working on PECI kernel driver and hwmon driver implementation.
> The kernel driver would provide these PECI commands as ioctls:
>
>
>
> - low-level PECI xfer command
>
> - Ping()
>
> - GetDIB()
>
> - GetTemp()
>
> - RdPkgConfig()
>
> - WrPkgConfig()
>
> - RdIAMSR()
>
> - RdPCIConfigLocal()
>
> - WrPCIConfigLocal()
>
> - RdPCIConfig()
>
>
>
> Also, through the hwmon driver, these temperature monitoring features would
> be provided:
>
>
>
> - Core temperature
>
> - DTS thermal margin (hysteresis)
>
> - DDR DIMM temperature
>
> - etc.
>
>
>
> Patches will come in to upstream when it is ready.
>
>
>
> Cheers,
>
> Jae


* Re: PECI API?
  2017-10-23 17:03 PECI API? Jae Hyun Yoo
  2017-10-30 17:45 ` Patrick Venture
@ 2017-10-30 19:21 ` Brad Bishop
  2017-10-31 16:26   ` Jae Hyun Yoo
  1 sibling, 1 reply; 19+ messages in thread
From: Brad Bishop @ 2017-10-30 19:21 UTC (permalink / raw)
  To: Jae Hyun Yoo; +Cc: openbmc, ed.tanous, james.feist


> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
> 
> Hi Dave,
>  
> I’m currently working on PECI kernel driver

I’m curious about the high level structure.  I’m sure others are as well.  Anything
you can share would be informative and appreciated!

A couple questions that popped into my head:

 - Would there be a new Linux bus type or core framework for this?  
 - How many drivers would there be for a full stack?  Something like this?
     - client? (hwmon, for example)
     - core? (common code)
     - bmc specific implementation? (aspeed, nuvoton, emulated differences)
 - Have you considered using DT bindings and/or how they would look?

These questions are motivated by the recent upstreaming experience with FSI
(flexible support interface) where a similar structure was used.  FSI on POWER
feels similar to PECI in terms of usage and features so I thought I’d just throw
this out there as a possible reference point to consider.

> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>  
> - low-level PECI xfer command

Would a separate ‘dev’ driver similar to i2c-dev make sense here?  Just
thinking out loud.

> - Ping()
> - GetDIB()
> - GetTemp()
> - RdPkgConfig()
> - WrPkgConfig()
> - RdIAMSR()
> - RdPCIConfigLocal()
> - WrPCIConfigLocal()
> - RdPCIConfig()
>  
> Also, through the hwmon driver, these temperature monitoring features would be provided:
>  
> - Core temperature
> - DTS thermal margin (hysteresis)
> - DDR DIMM temperature
> - etc.

Sweet!

>  
> Patches will come in to upstream when it is ready.
>  
> Cheers,
> Jae

For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016
but it didn’t go anywhere:

https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html

thx - brad


* RE: PECI API?
  2017-10-30 19:21 ` Brad Bishop
@ 2017-10-31 16:26   ` Jae Hyun Yoo
  2017-10-31 18:29     ` Rick Altherr
  0 siblings, 1 reply; 19+ messages in thread
From: Jae Hyun Yoo @ 2017-10-31 16:26 UTC (permalink / raw)
  To: 'Brad Bishop'; +Cc: openbmc, ed.tanous, james.feist

>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>> 
>> Hi Dave,
>>  
>> I'm currently working on PECI kernel driver
>
> I'm curious about the high level structure.  I'm sure others are as well.
> Anything you can share would be informative and appreciated!
>
> A couple questions that popped into my head:
>
>  - Would there be a new Linux bus type or core framework for this?  
>  - How many drivers would there be for a full stack.  Something like this?
>      - client? (hwmon, for example)
>      - core? (common code)
>      - bmc specific implementation? (aspeed, nuvoton, emulated differences)
>  - Have you considered using DT bindings and/or how they would look?
>
> These questions are motivated by the recent upstreaming experience with
> FSI (flexible support interface) where a similar structure was used.
> FSI on POWER feels similar to PECI in terms of usage and features so I thought
> I'd just throw this out there as a possible reference point to consider.

PECI uses a single-wire interface, which is different from other popular
interfaces such as I2C and MTD, and there is no common core framework for it
in the kernel, so I'm adding the main PECI control driver as a misc device and
the other driver into the hwmon subsystem. The reason I separate the
implementation into two drivers is that PECI can be used not only for
temperature monitoring but also for platform manageability, processor
diagnostics and failure analysis, so the misc control driver will serve as a
common PECI driver for all of those purposes, and the hwmon subsystem driver
will use the common PECI driver just for temperature monitoring. These drivers
will be a BMC-specific implementation that supports the Aspeed chipset only.
Support for the Nuvoton chipset was not considered in my implementation
because Nuvoton has a different HW and register scheme, and Nuvoton already
has dedicated driver implementations in the hwmon subsystem for each of its
chipset variants (nct6683.c, nct6775.c, nct7802.c).
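
A minimal sketch of the split described above, assuming a /dev/peci misc
device plus an exported in-kernel transfer helper that the hwmon driver would
call; all names here are hypothetical:

  #include <linux/fs.h>
  #include <linux/init.h>
  #include <linux/miscdevice.h>
  #include <linux/module.h>

  struct peci_xfer_msg;   /* request/response layout, as sketched earlier */

  /* Shared transfer entry point; in-kernel users such as the hwmon driver
   * would call this directly instead of going through the char device. */
  int peci_do_xfer(struct peci_xfer_msg *msg)
  {
          /* drive the controller hardware here */
          return 0;
  }
  EXPORT_SYMBOL_GPL(peci_do_xfer);

  static long peci_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
  {
          /* decode the PECI_IOC_* commands and hand them to peci_do_xfer() */
          return 0;
  }

  static const struct file_operations peci_fops = {
          .owner          = THIS_MODULE,
          .unlocked_ioctl = peci_ioctl,
  };

  static struct miscdevice peci_miscdev = {
          .minor = MISC_DYNAMIC_MINOR,
          .name  = "peci",                /* shows up as /dev/peci */
          .fops  = &peci_fops,
  };

  static int __init peci_init(void)
  {
          return misc_register(&peci_miscdev);
  }
  module_init(peci_init);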

>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>>  
>> - low-level PECI xfer command
>
> Would a separate 'dev' driver similar to i2c-dev make sense here?  Just thinking out loud.
>

Yes, the drivers will be separated into two, but it's hard to say that this approach is similar
to i2c-dev. It would have a somewhat different shape.

>> - Ping()
>> - GetDIB()
>> - GetTemp()
>> - RdPkgConfig()
>> - WrPkgConfig()
>> - RdIAMSR()
>> - RdPCIConfigLocal()
>> - WrPCIConfigLocal()
>> - RdPCIConfig()
>>  
>> Also, through the hwmon driver, these temperature monitoring features would be provided:
>>  
>> - Core temperature
>> - DTS thermal margin (hysteresis)
>> - DDR DIMM temperature
>> - etc.
>
> Sweet!
>
>>  
>> Patches will come in to upstream when it is ready.
>>  
>> Cheers,
>> Jae
>
> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn't go anywhere:
>
> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
>
> thx - brad
>

My implementation is also heavily based on the Aspeed SDK driver, but modified a lot to provide
more suitable functionality for the OpenBMC project. Hopefully, it can be introduced soon.

thx,
Jae

-----Original Message-----
From: Brad Bishop [mailto:bradleyb@fuzziesquirrel.com] 
Sent: Monday, October 30, 2017 12:22 PM
To: Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
Cc: openbmc@lists.ozlabs.org; ed.tanous@linux.intel.com; james.feist@linux.intel.com
Subject: Re: PECI API?


> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
> 
> Hi Dave,
>  
> I’m currently working on PECI kernel driver

I’m curious about the high level structure.  I’m sure others are as well.  Anything you can share would be informative and appreciated!

A couple questions that popped into my head:

 - Would there be a new Linux bus type or core framework for this?  
 - How many drivers would there be for a full stack.  Something like this?
     - client? (hwmon, for example)
     - core? (common code)
     - bmc specific implementation? (aspeed, nuvoton, emulated differences)
 - Have you considered using DT bindings and/or how they would look?

These questions are motivated by the recent upstreaming experience with FSI (flexible support interface) where a similar structure was used.  FSI on POWER feels similar to PECI in terms of usage and features so I thought I’d just throw this out there as a possible reference point to consider.

> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>  
> - low-level PECI xfer command

Would a separate ‘dev’ driver similar to i2c-dev make sense here?  Just thinking out loud.

> - Ping()
> - GetDIB()
> - GetTemp()
> - RdPkgConfig()
> - WrPkgConfig()
> - RdIAMSR()
> - RdPCIConfigLocal()
> - WrPCIConfigLocal()
> - RdPCIConfig()
>  
> Also, through the hwmon driver, these temperature monitoring features would be provided:
>  
> - Core temperature
> - DTS thermal margin (hysteresis)
> - DDR DIMM temperature
> - etc.

Sweet!

>  
> Patches will come in to upstream when it is ready.
>  
> Cheers,
> Jae

For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn’t go anywhere:

https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html

thx - brad


* Re: PECI API?
  2017-10-31 16:26   ` Jae Hyun Yoo
@ 2017-10-31 18:29     ` Rick Altherr
  2017-10-31 21:50       ` Jae Hyun Yoo
  0 siblings, 1 reply; 19+ messages in thread
From: Rick Altherr @ 2017-10-31 18:29 UTC (permalink / raw)
  To: Jae Hyun Yoo; +Cc: Brad Bishop, OpenBMC Maillist, ed.tanous, james.feist

On Tue, Oct 31, 2017 at 9:26 AM, Jae Hyun Yoo
<jae.hyun.yoo@linux.intel.com> wrote:
>>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>>
>>> Hi Dave,
>>>
>>> I'm currently working on PECI kernel driver
>>
>> I'm curious about the high level structure.  I'm sure others are as well.
>> Anything you can share would be informative and appreciated!
>>
>> A couple questions that popped into my head:
>>
>>  - Would there be a new Linux bus type or core framework for this?
>>  - How many drivers would there be for a full stack.  Something like this?
>>      - client? (hwmon, for example)
>>      - core? (common code)
>>      - bmc specific implementation? (aspeed, nuvoton, emulated differences)
>>  - Have you considered using DT bindings and/or how they would look?
>>
>> These questions are motivated by the recent upstreaming experience with
>> FSI (flexible support interface) where a similar structure was used.
>> FSI on POWER feels similar to PECI in terms of usage and features so I thought
>> I'd just throw this out there as a possible reference point to consider.
>
> PECI is using single-wired interface which is different from other popular
> interfaces such as I2C and MTD, and therefore it doesn't have any common core framework
> in kernel so I'm adding the PECI main contorl driver as an misc type and the other
> one into hwmon subsystem. The reason why I seperate the implementation into two
> drivers is, PECI can be used not only for temperature monitoring but also for
> platform manageability, processor diagnostics and failure analysis, so the misc
> control driver will be used as a common PECI driver for all those purposes flexibly
> and the hwmon subsystem driver will use the common PECI driver just for
> temperature monitoring. These drivers will be BMC specific implementation
> which support Aspeed shipset only. Support for Nuvoton chipset was not considered
> in my implementation because Nuvoton has different HW and register scheme, also
> Nuvoton already has its dedicated driver implementation in hwmon subsystem
> for their each chipset variant (nct6683.c  nct6775.c  nct7802.c).
>

Nuvoton is starting to submit support for their Poleg BMC to upstream
(http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/538226.html).
This BMC includes a PECI controller similar to the Aspeed design but
with a different register layout.  At a minimum, the misc driver needs
to support multiple backend drivers to allow Nuvoton to implement the
same interface.  The chips you listed that are already in hwmon are
for Nuvoton's SuperIOs, not their BMCs.

>>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>>>
>>> - low-level PECI xfer command
>>
>> Would a separate 'dev' driver similar to i2c-dev make sense here?  Just thinking out loud.
>>
>
> Yes, drivers will be seperated into two but it's hard to say that this way is similar to i2c-dev.
> It would have a bit different shape.
>

I'm not terribly familiar with the PECI protocol.  I'll see about
getting a copy of the spec.  From what I can find via searches, it
looks like individual nodes on the bus are addressed similarly to I2C.
I'd expect that to be similar to how i2c-dev is structured: a kobject
per master and a kobject per address on the bus.  That way, drivers
can be bound to individual addresses. The misc driver would focus on
exposing interaction with a specific address on the bus in a generic
fashion.
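
If such a subsystem were ever added, a very rough outline following the i2c
model might look like this; everything here is hypothetical and only meant to
make the kobject-per-master / kobject-per-address idea concrete:

  #include <linux/device.h>
  #include <linux/of_device.h>
  #include <linux/types.h>

  struct peci_adapter {                   /* one per controller ("master") */
          struct device dev;
  };

  struct peci_client {                    /* one per address on the wire */
          struct device dev;
          u8 addr;                        /* e.g. 0x30 for CPU socket 0 */
          struct peci_adapter *adapter;
  };

  static int peci_bus_match(struct device *dev, struct device_driver *drv)
  {
          /* bind client devices to drivers, e.g. by DT compatible string */
          return of_driver_match_device(dev, drv);
  }

  static struct bus_type peci_bus_type = {
          .name  = "peci",
          .match = peci_bus_match,
  };

  /* A client driver (a hwmon driver for an Intel CPU, say) would bind to a
   * peci_client, while a peci-dev style chardev could expose any client
   * address generically, much like i2c-dev does. */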

>>> - Ping()
>>> - GetDIB()
>>> - GetTemp()
>>> - RdPkgConfig()
>>> - WrPkgConfig()
>>> - RdIAMSR()
>>> - RdPCIConfigLocal()
>>> - WrPCIConfigLocal()
>>> - RdPCIConfig()
>>>
>>> Also, through the hwmon driver, these temperature monitoring features would be provided:
>>>
>>> - Core temperature
>>> - DTS thermal margin (hysteresis)
>>> - DDR DIMM temperature
>>> - etc.
>>
>> Sweet!
>>
>>>
>>> Patches will come in to upstream when it is ready.
>>>
>>> Cheers,
>>> Jae
>>
>> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn't go anywhere:
>>
>> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
>>
>> thx - brad
>>
>
> My implementation is also heavily based on the Aspeed SDK driver but modified a lot to provide
> more suitable functionality for openbmc project. Hopefully, it could be introduced soon.
>
> thx,
> Jae
>
> -----Original Message-----
> From: Brad Bishop [mailto:bradleyb@fuzziesquirrel.com]
> Sent: Monday, October 30, 2017 12:22 PM
> To: Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
> Cc: openbmc@lists.ozlabs.org; ed.tanous@linux.intel.com; james.feist@linux.intel.com
> Subject: Re: PECI API?
>
>
>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>
>> Hi Dave,
>>
>> I’m currently working on PECI kernel driver
>
> I’m curious about the high level structure.  I’m sure others are as well.  Anything you can share would be informative and appreciated!
>
> A couple questions that popped into my head:
>
>  - Would there be a new Linux bus type or core framework for this?
>  - How many drivers would there be for a full stack.  Something like this?
>      - client? (hwmon, for example)
>      - core? (common code)
>      - bmc specific implementation? (aspeed, nuvoton, emulated differences)
>  - Have you considered using DT bindings and/or how they would look?
>
> These questions are motivated by the recent upstreaming experience with FSI (flexible support interface) where a similar structure was used.  FSI on POWER feels similar to PECI in terms of usage and features so I thought I’d just throw this out there as a possible reference point to consider.
>
>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>>
>> - low-level PECI xfer command
>
> Would a separate ‘dev’ driver similar to i2c-dev make sense here?  Just thinking out loud.
>
>> - Ping()
>> - GetDIB()
>> - GetTemp()
>> - RdPkgConfig()
>> - WrPkgConfig()
>> - RdIAMSR()
>> - RdPCIConfigLocal()
>> - WrPCIConfigLocal()
>> - RdPCIConfig()
>>
>> Also, through the hwmon driver, these temperature monitoring features would be provided:
>>
>> - Core temperature
>> - DTS thermal margin (hysteresis)
>> - DDR DIMM temperature
>> - etc.
>
> Sweet!
>
>>
>> Patches will come in to upstream when it is ready.
>>
>> Cheers,
>> Jae
>
> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn’t go anywhere:
>
> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
>
> thx - brad
>


* RE: PECI API?
  2017-10-31 18:29     ` Rick Altherr
@ 2017-10-31 21:50       ` Jae Hyun Yoo
  2017-10-31 22:07         ` Rick Altherr
  0 siblings, 1 reply; 19+ messages in thread
From: Jae Hyun Yoo @ 2017-10-31 21:50 UTC (permalink / raw)
  To: 'Rick Altherr'
  Cc: 'Brad Bishop', 'OpenBMC Maillist',
	ed.tanous, james.feist

>> On Tue, Oct 31, 2017 at 9:26 AM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>>>
>>>> Hi Dave,
>>>>
>>>> I'm currently working on PECI kernel driver
>>>
>>> I'm curious about the high level structure.  I'm sure others are as well.
>>> Anything you can share would be informative and appreciated!
>>>
>>> A couple questions that popped into my head:
>>>
>>>  - Would there be a new Linux bus type or core framework for this?
>>>  - How many drivers would there be for a full stack.  Something like this?
>>>      - client? (hwmon, for example)
>>>      - core? (common code)
>>>      - bmc specific implementation? (aspeed, nuvoton, emulated 
>>> differences)
>>>  - Have you considered using DT bindings and/or how they would look?
>>>
>>> These questions are motivated by the recent upstreaming experience 
>>> with FSI (flexible support interface) where a similar structure was used.
>>> FSI on POWER feels similar to PECI in terms of usage and features so 
>>> I thought I'd just throw this out there as a possible reference point to consider.
>>
>> PECI is using single-wired interface which is different from other 
>> popular interfaces such as I2C and MTD, and therefore it doesn't have 
>> any common core framework in kernel so I'm adding the PECI main 
>> contorl driver as an misc type and the other one into hwmon subsystem. 
>> The reason why I seperate the implementation into two drivers is, PECI 
>> can be used not only for temperature monitoring but also for platform 
>> manageability, processor diagnostics and failure analysis, so the misc 
>> control driver will be used as a common PECI driver for all those 
>> purposes flexibly and the hwmon subsystem driver will use the common 
>> PECI driver just for temperature monitoring. These drivers will be BMC 
>> specific implementation which support Aspeed shipset only. Support for 
>> Nuvoton chipset was not considered in my implementation because 
>> Nuvoton has different HW and register scheme, also Nuvoton already has
>> its dedicated driver implementation in hwmon subsystem for their each
>> chipset variant (nct6683.c  nct6775.c  nct7802.c).
>>
>
> Nuvoton is starting to submit support for their Poleg BMC to upstream
> (http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/538226.html).
> This BMC includes a PECI controller similar to the Aspeed design but
> with a different register layout.  At a minimum, the misc driver needs
> to support multiple backend drivers to allow Nuvoton to implement the same
> interface.  The chips you listed that are already in hwmon are for
> Nuvoton's SuperIOs, not their BMCs.
>

Thanks for pointing out the current Poleg BMC upstreaming; I didn't know about that before.
Ideally, it would be great if we supported all BMC PECI controllers in a single device driver,
but we should consider dependencies such as the SCU register setting in the bootloader, the
clock setting for the PECI controller HW block, and so on, which vary across BMC controller
chipsets. Usually, these dependencies should be covered by kernel config and device tree
settings. My thought is that each BMC controller should have its own PECI misc driver, and
then we could selectively enable one through the kernel configuration.
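
As a sketch of how those chip-specific dependencies could hang off the device
tree and Kconfig, assuming an Aspeed-only platform driver; the compatible
string and names below are assumptions:

  #include <linux/clk.h>
  #include <linux/module.h>
  #include <linux/of.h>
  #include <linux/platform_device.h>

  static int aspeed_peci_probe(struct platform_device *pdev)
  {
          struct clk *clk;

          /* chip-specific clock (and similar) setup comes from the DT node */
          clk = devm_clk_get(&pdev->dev, NULL);
          if (IS_ERR(clk))
                  return PTR_ERR(clk);
          return clk_prepare_enable(clk);
  }

  static const struct of_device_id aspeed_peci_of_match[] = {
          { .compatible = "aspeed,ast2500-peci" },  /* assumed binding name */
          { }
  };
  MODULE_DEVICE_TABLE(of, aspeed_peci_of_match);

  /* A Nuvoton variant would be a separate driver behind its own Kconfig
   * option, selected by that BMC's defconfig. */
  static struct platform_driver aspeed_peci_driver = {
          .probe  = aspeed_peci_probe,
          .driver = {
                  .name           = "aspeed-peci",
                  .of_match_table = aspeed_peci_of_match,
          },
  };
  module_platform_driver(aspeed_peci_driver);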

>>>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>>>>
>>>> - low-level PECI xfer command
>>>
>>> Would a separate 'dev' driver similar to i2c-dev make sense here?  Just thinking out loud.
>>>
>>
>> Yes, drivers will be seperated into two but it's hard to say that this way is similar to i2c-dev.
>> It would have a bit different shape.
>>
>
> I'm not terribly familiar with the PECI protocol.  I'll see about getting a copy of the spec.
> From what I can find via searches, it looks like individual nodes on the bus are addressed similar to I2C.
> I'd expect that to be similar to how i2c-dev is structured: a kobject per master and a kobject
> per address on the bus.  That way, drivers can be bound to individual addresses. The misc driver
> would focus on exposing interacting with a specific address on the bus in a generic fashion.
>

As you said, it would be very useful if the kernel had a core bus framework for PECI like the
one for I2C, but the current kernel doesn't have one, and it would be a huge project in itself
if we were to implement it. Generally, the PECI bus topology is very simple, unlike I2C:
usually a system has only one BMC controller, and it has connections to the CPUs, that's all.
I don't see an advantage in using a core bus framework for such a simple interface.
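
For what it's worth, that flat topology boils down to little more than a base
client address plus a socket index; 0x30 as the base follows the usual PECI
convention, and the helper itself is made up:

  #include <linux/types.h>

  #define PECI_BASE_ADDR  0x30    /* CPU socket 0; further sockets count up */

  /* Hypothetical helper: the client address is just base + socket index. */
  static inline u8 peci_socket_addr(unsigned int socket)
  {
          return PECI_BASE_ADDR + socket;
  }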

>>>> - Ping()
>>>> - GetDIB()
>>>> - GetTemp()
>>>> - RdPkgConfig()
>>>> - WrPkgConfig()
>>>> - RdIAMSR()
>>>> - RdPCIConfigLocal()
>>>> - WrPCIConfigLocal()
>>>> - RdPCIConfig()
>>>>
>>>> Also, through the hwmon driver, these temperature monitoring features would be provided:
>>>>
>>>> - Core temperature
>>>> - DTS thermal margin (hysteresis)
>>>> - DDR DIMM temperature
>>>> - etc.
>>>
>>> Sweet!
>>>
>>>>
>>>> Patches will come in to upstream when it is ready.
>>>>
>>>> Cheers,
>>>> Jae
>>>
>>> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn't go anywhere:
>>>
>>> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
>>>
>>> thx - brad
>>>
>>
>> My implementation is also heavily based on the Aspeed SDK driver but 
>> modified a lot to provide more suitable functionality for openbmc project. Hopefully, it could be introduced soon.
>>
>> thx,
>> Jae


* Re: PECI API?
  2017-10-31 21:50       ` Jae Hyun Yoo
@ 2017-10-31 22:07         ` Rick Altherr
  2017-11-01 16:45           ` Jae Hyun Yoo
  0 siblings, 1 reply; 19+ messages in thread
From: Rick Altherr @ 2017-10-31 22:07 UTC (permalink / raw)
  To: Jae Hyun Yoo; +Cc: Brad Bishop, OpenBMC Maillist, ed.tanous, james.feist

On Tue, Oct 31, 2017 at 2:50 PM, Jae Hyun Yoo
<jae.hyun.yoo@linux.intel.com> wrote:
>
> >> On Tue, Oct 31, 2017 at 9:26 AM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
> >>>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
> >>>>
> >>>> Hi Dave,
> >>>>
> >>>> I'm currently working on PECI kernel driver
> >>>
> >>> I'm curious about the high level structure.  I'm sure others are as well.
> >>> Anything you can share would be informative and appreciated!
> >>>
> >>> A couple questions that popped into my head:
> >>>
> >>>  - Would there be a new Linux bus type or core framework for this?
> >>>  - How many drivers would there be for a full stack.  Something like this?
> >>>      - client? (hwmon, for example)
> >>>      - core? (common code)
> >>>      - bmc specific implementation? (aspeed, nuvoton, emulated
> >>> differences)
> >>>  - Have you considered using DT bindings and/or how they would look?
> >>>
> >>> These questions are motivated by the recent upstreaming experience
> >>> with FSI (flexible support interface) where a similar structure was used.
> >>> FSI on POWER feels similar to PECI in terms of usage and features so
> >>> I thought I'd just throw this out there as a possible reference point to consider.
> >>
> >> PECI is using single-wired interface which is different from other
> >> popular interfaces such as I2C and MTD, and therefore it doesn't have
> >> any common core framework in kernel so I'm adding the PECI main
> >> contorl driver as an misc type and the other one into hwmon subsystem.
> >> The reason why I seperate the implementation into two drivers is, PECI
> >> can be used not only for temperature monitoring but also for platform
> >> manageability, processor diagnostics and failure analysis, so the misc
> >> control driver will be used as a common PECI driver for all those
> >> purposes flexibly and the hwmon subsystem driver will use the common
> >> PECI driver just for temperature monitoring. These drivers will be BMC
> >> specific implementation which support Aspeed shipset only. Support for
> >> Nuvoton chipset was not considered in my implementation because
> >> Nuvoton has different HW and register scheme, also Nuvoton already has
> >> its dedicated driver implementation in hwmon subsystem for their each
> >> chipset variant (nct6683.c  nct6775.c  nct7802.c).
> >>
> >
> > Nuvoton is starting to submit support for their Poleg BMC to upstream
> > (http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/538226.html).
> > This BMC includes a PECI controller similar to the Aspeed design but
> > with a different register layout.  At a minimum, the misc driver needs
> > to support multiple backend drivers to allow Nuvoton to implement the same
> > interface.  The chips you listed that are already in hwmon are for
> > Nuvoton's SuperIOs, not their BMCs.
> >
>
> Thanks for your pointing out of the current Poleg BMC upstreaming. I didn't know about that before.
> Ideally, it would be great if we support all BMC PECI controllers in a single device driver
> but we should consider some dependencies such as SCU register setting in bootloader, clock setting
> for the PECI controller HW block and etc that would vary on each BMC controller chipset.
> Usually, these dependencies should be covered by kernel config and device tree settings.
> My thought is, each BMC controller should have its own PECI misc driver then we could
> selectively enable one by kernel configuration.
>

Are you expecting each BMC controller's PECI misc driver to
re-implement the device ioctls?  If I assume the misc device and ioctl
implementation are shared, I can't see how adding a subsystem would be
significantly more work.  Doing so would clarify what the boundaries
are between controller implementation and protocol behavior.
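
One way to draw that boundary would be a shared protocol/ioctl layer sitting
on top of a per-chip ops structure, roughly like this; the names are
hypothetical:

  #include <linux/types.h>

  struct peci_xfer_msg;                   /* request/response layout */

  struct peci_controller_ops {
          int (*xfer)(void *priv, struct peci_xfer_msg *msg);
  };

  struct peci_controller {
          const struct peci_controller_ops *ops;  /* aspeed, nuvoton, ... */
          void *priv;
  };

  /* Everything above this call -- ioctl decoding, command helpers, retries --
   * would be shared and never touch chip registers directly. */
  static int peci_xfer(struct peci_controller *ctrl, struct peci_xfer_msg *msg)
  {
          return ctrl->ops->xfer(ctrl->priv, msg);
  }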

> >>>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
> >>>>
> >>>> - low-level PECI xfer command
> >>>
> >>> Would a separate 'dev' driver similar to i2c-dev make sense here?  Just thinking out loud.
> >>>
> >>
> >> Yes, drivers will be seperated into two but it's hard to say that this way is similar to i2c-dev.
> >> It would have a bit different shape.
> >>
> >
> > I'm not terribly familiar with the PECI protocol.  I'll see about getting a copy of the spec.
> > From what I can find via searches, it looks like individual nodes on the bus are addressed similar to I2C.
> > I'd expect that to be similar to how i2c-dev is structured: a kobject per master and a kobject
> > per address on the bus.  That way, drivers can be bound to individual addresses. The misc driver
> > would focus on exposing interacting with a specific address on the bus in a generic fashion.
> >
>
> As you said, it would be very useful if kernel has core bus framework like I2C, but current kernel
> doesn't have the core bus framework for PECI, and it would be a hugh project itself if we are going to
> implement one.

Really?  IBM did so for FSI and it really helped with understanding the design.

> Generally, PECI bus topology is very simple unlike I2C. Usually in a single system,
> there is only one BMC controller and it has connections with CPUs, that's all. I don't see an advantage
> of using core bus framework on this simple interface.
>

Ideally, an hwmon driver for PECI on an Intel CPU only needs to know
how to issue PECI commands to that device.  What address it is at and
how the bus delivers the command to the node are irrelevant details.
How do you plan to describe the PECI bus in a dts?  Can I use the same
dt bindings for the Intel CPU's PECI interface for both Aspeed and
Nuvoton?

> >>>> - Ping()
> >>>> - GetDIB()
> >>>> - GetTemp()
> >>>> - RdPkgConfig()
> >>>> - WrPkgConfig()
> >>>> - RdIAMSR()
> >>>> - RdPCIConfigLocal()
> >>>> - WrPCIConfigLocal()
> >>>> - RdPCIConfig()
> >>>>
> >>>> Also, through the hwmon driver, these temperature monitoring features would be provided:
> >>>>
> >>>> - Core temperature
> >>>> - DTS thermal margin (hysteresis)
> >>>> - DDR DIMM temperature
> >>>> - etc.
> >>>
> >>> Sweet!
> >>>
> >>>>
> >>>> Patches will come in to upstream when it is ready.
> >>>>
> >>>> Cheers,
> >>>> Jae
> >>>
> >>> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn't go anywhere:
> >>>
> >>> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
> >>>
> >>> thx - brad
> >>>
> >>
> >> My implementation is also heavily based on the Aspeed SDK driver but
> >> modified a lot to provide more suitable functionality for openbmc project. Hopefully, it could be introduced soon.
> >>
> >> thx,
> >> Jae
>


* RE: PECI API?
  2017-10-31 22:07         ` Rick Altherr
@ 2017-11-01 16:45           ` Jae Hyun Yoo
  2017-11-01 17:27             ` Rick Altherr
  0 siblings, 1 reply; 19+ messages in thread
From: Jae Hyun Yoo @ 2017-11-01 16:45 UTC (permalink / raw)
  To: 'Rick Altherr'
  Cc: 'Brad Bishop', 'OpenBMC Maillist',
	ed.tanous, james.feist

>>>> On Tue, Oct 31, 2017 at 9:26 AM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>>>>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>>>>>
>>>>>> Hi Dave,
>>>>>>
>>>>>> I'm currently working on PECI kernel driver
>>>>>
>>>>> I'm curious about the high level structure.  I'm sure others are as well.
>>>>> Anything you can share would be informative and appreciated!
>>>>>
>>>>> A couple questions that popped into my head:
>>>>>
>>>>>  - Would there be a new Linux bus type or core framework for this?
>>>>>  - How many drivers would there be for a full stack.  Something like this?
>>>>>      - client? (hwmon, for example)
>>>>>      - core? (common code)
>>>>>      - bmc specific implementation? (aspeed, nuvoton, emulated
>>>>> differences)
>>>>>  - Have you considered using DT bindings and/or how they would look?
>>>>>
>>>>> These questions are motivated by the recent upstreaming experience 
>>>>> with FSI (flexible support interface) where a similar structure was used.
>>>>> FSI on POWER feels similar to PECI in terms of usage and features 
>>>>> so I thought I'd just throw this out there as a possible reference point to consider.
>>>>
>>>> PECI is using single-wired interface which is different from other 
>>>> popular interfaces such as I2C and MTD, and therefore it doesn't 
>>>> have any common core framework in kernel so I'm adding the PECI 
>>>> main contorl driver as an misc type and the other one into hwmon subsystem.
>>>> The reason why I seperate the implementation into two drivers is, 
>>>> PECI can be used not only for temperature monitoring but also for 
>>>> platform manageability, processor diagnostics and failure analysis, 
>>>> so the misc control driver will be used as a common PECI driver for 
>>>> all those purposes flexibly and the hwmon subsystem driver will use 
>>>> the common PECI driver just for temperature monitoring. These 
>>>> drivers will be BMC specific implementation which support Aspeed 
>>>> shipset only. Support for Nuvoton chipset was not considered in my 
>>>> implementation because Nuvoton has different HW and register 
>>>> scheme, also Nuvoton already has its dedicated driver 
>>>> implementation in hwmon subsystem for their each chipset variant (nct6683.c  nct6775.c  nct7802.c).
>>>>
>>>
>>> Nuvoton is starting to submit support for their Poleg BMC to 
>>> upstream (http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/538226.html).
>>> This BMC includes a PECI controller similar to the Aspeed design but 
>>> with a different register layout.  At a minimum, the misc driver 
>>> needs to support multiple backend drivers to allow Nuvoton to 
>>> implement the same interface.  The chips you listed that are already 
>>> in hwmon are for Nuvoton's SuperIOs, not their BMCs.
>>>
>>
>> Thanks for your pointing out of the current Poleg BMC upstreaming. I didn't know about that before.
>> Ideally, it would be great if we support all BMC PECI controllers in a 
>> single device driver but we should consider some dependencies such as 
>> SCU register setting in bootloader, clock setting for the PECI controller HW block and etc that would vary on each BMC controller chipset.
>> Usually, these dependencies should be covered by kernel config and device tree settings.
>> My thought is, each BMC controller should have its own PECI misc 
>> driver then we could selectively enable one by kernel configuration.
>>
>
> Are you expecting each BMC controller's PECI misc driver to re-implement the device ioctls?
> If I assume the misc device and ioctl implementation are shared, I can't see how adding a subsystem would be significantly more work.
> Doing so would clarify what the boundaries are between controller implementation and protocol behavior.
>

Okay, I agree; that is a reasonable concern. At least, if possible, we should provide a compatible
ioctl set. I'll check its feasibility after getting Nuvoton's datasheet and their SDK.

>>>>>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>>>>>>
>>>>>> - low-level PECI xfer command
>>>>>
>>>>> Would a separate 'dev' driver similar to i2c-dev make sense here?  Just thinking out loud.
>>>>>
>>>>
>>>> Yes, drivers will be seperated into two but it's hard to say that this way is similar to i2c-dev.
>>>> It would have a bit different shape.
>>>>
>>>
>>> I'm not terribly familiar with the PECI protocol.  I'll see about getting a copy of the spec.
>>> From what I can find via searches, it looks like individual nodes on the bus are addressed similar to I2C.
>>> I'd expect that to be similar to how i2c-dev is structured: a 
>>> kobject per master and a kobject per address on the bus.  That way, 
>>> drivers can be bound to individual addresses. The misc driver would focus on exposing interacting with a specific address on the bus in a generic fashion.
>>>
>>
>> As you said, it would be very useful if kernel has core bus framework 
>> like I2C, but current kernel doesn't have the core bus framework for 
>> PECI, and it would be a hugh project itself if we are going to implement one.
>
> Really?  IBM did so for FSI and it really helped with understanding the design.
>

Yes, IBM did really great work on FSI; kudos to them.

>> Generally, PECI bus topology is very simple unlike I2C. Usually in a 
>> single system, there is only one BMC controller and it has connections 
>> with CPUs, that's all. I don't see an advantage of using core bus framework on this simple interface.
>>
>
> Ideally, an hwmon driver for PECI on an Intel CPU only needs to know how to issue PECI commands to that device.
> What address it is at and how the bus delivers the command to the node are irrelevant details.
> How do you plan to describe the PECI bus in a dts?
> Can I use the same dt bindings for the Intel CPU's PECI interface for both Aspeed and Nuvoton?
>

HW-dependent parameters will be added into the dts. Each SoC has its own DT binding set, so it
couldn't be shared between Aspeed and Nuvoton.

>>>>>> - Ping()
>>>>>> - GetDIB()
>>>>>> - GetTemp()
>>>>>> - RdPkgConfig()
>>>>>> - WrPkgConfig()
>>>>>> - RdIAMSR()
>>>>>> - RdPCIConfigLocal()
>>>>>> - WrPCIConfigLocal()
>>>>>> - RdPCIConfig()
>>>>>>
>>>>>> Also, through the hwmon driver, these temperature monitoring features would be provided:
>>>>>>
>>>>>> - Core temperature
>>>>>> - DTS thermal margin (hysteresis)
>>>>>> - DDR DIMM temperature
>>>>>> - etc.
>>>>>
>>>>> Sweet!
>>>>>
>>>>>>
>>>>>> Patches will come in to upstream when it is ready.
>>>>>>
>>>>>> Cheers,
>>>>>> Jae
>>>>>
>>>>> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn't go anywhere:
>>>>>
>>>>> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
>>>>>
>>>>> thx - brad
>>>>>
>>>>
>>>> My implementation is also heavily based on the Aspeed SDK driver 
>>>> but modified a lot to provide more suitable functionality for openbmc project. Hopefully, it could be introduced soon.
>>>>
>>>> thx,
>>>> Jae
>>


* Re: PECI API?
  2017-11-01 16:45           ` Jae Hyun Yoo
@ 2017-11-01 17:27             ` Rick Altherr
  0 siblings, 0 replies; 19+ messages in thread
From: Rick Altherr @ 2017-11-01 17:27 UTC (permalink / raw)
  To: Jae Hyun Yoo; +Cc: Brad Bishop, OpenBMC Maillist, ed.tanous, james.feist

On Wed, Nov 1, 2017 at 9:45 AM, Jae Hyun Yoo
<jae.hyun.yoo@linux.intel.com> wrote:
>>>>> On Tue, Oct 31, 2017 at 9:26 AM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>>>>>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com> wrote:
>>>>>>>
>>>>>>> Hi Dave,
>>>>>>>
>>>>>>> I'm currently working on PECI kernel driver
>>>>>>
>>>>>> I'm curious about the high level structure.  I'm sure others are as well.
>>>>>> Anything you can share would be informative and appreciated!
>>>>>>
>>>>>> A couple questions that popped into my head:
>>>>>>
>>>>>>  - Would there be a new Linux bus type or core framework for this?
>>>>>>  - How many drivers would there be for a full stack.  Something like this?
>>>>>>      - client? (hwmon, for example)
>>>>>>      - core? (common code)
>>>>>>      - bmc specific implementation? (aspeed, nuvoton, emulated
>>>>>> differences)
>>>>>>  - Have you considered using DT bindings and/or how they would look?
>>>>>>
>>>>>> These questions are motivated by the recent upstreaming experience
>>>>>> with FSI (flexible support interface) where a similar structure was used.
>>>>>> FSI on POWER feels similar to PECI in terms of usage and features
>>>>>> so I thought I'd just throw this out there as a possible reference point to consider.
>>>>>
>>>>> PECI is using single-wired interface which is different from other
>>>>> popular interfaces such as I2C and MTD, and therefore it doesn't
>>>>> have any common core framework in kernel so I'm adding the PECI
>>>>> main contorl driver as an misc type and the other one into hwmon subsystem.
>>>>> The reason why I seperate the implementation into two drivers is,
>>>>> PECI can be used not only for temperature monitoring but also for
>>>>> platform manageability, processor diagnostics and failure analysis,
>>>>> so the misc control driver will be used as a common PECI driver for
>>>>> all those purposes flexibly and the hwmon subsystem driver will use
>>>>> the common PECI driver just for temperature monitoring. These
>>>>> drivers will be BMC specific implementation which support Aspeed
>>>>> shipset only. Support for Nuvoton chipset was not considered in my
>>>>> implementation because Nuvoton has different HW and register
>>>>> scheme, also Nuvoton already has its dedicated driver
>>>>> implementation in hwmon subsystem for their each chipset variant (nct6683.c  nct6775.c  nct7802.c).
>>>>>
>>>>
>>>> Nuvoton is starting to submit support for their Poleg BMC to
>>>> upstream (http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/538226.html).
>>>> This BMC includes a PECI controller similar to the Aspeed design but
>>>> with a different register layout.  At a minimum, the misc driver
>>>> needs to support multiple backend drivers to allow Nuvoton to
>>>> implement the same interface.  The chips you listed that are already
>>>> in hwmon are for Nuvoton's SuperIOs, not their BMCs.
>>>>
>>>
>>> Thanks for your pointing out of the current Poleg BMC upstreaming. I didn't know about that before.
>>> Ideally, it would be great if we support all BMC PECI controllers in a
>>> single device driver but we should consider some dependencies such as
>>> SCU register setting in bootloader, clock setting for the PECI controller HW block and etc that would vary on each BMC controller chipset.
>>> Usually, these dependencies should be covered by kernel config and device tree settings.
>>> My thought is, each BMC controller should have its own PECI misc
>>> driver then we could selectively enable one by kernel configuration.
>>>
>>
>> Are you expecting each BMC controller's PECI misc driver to re-implement the device ioctls?
>> If I assume the misc device and ioctl implementation are shared, I can't see how adding a subsystem would be significantly more work.
>> Doing so would clarify what the boundaries are between controller implementation and protocol behavior.
>>
>
> Okay, I agreed. That is reasonable concern. At least, if possible, we should provide compatible
> ioctl set. I'll check its feasibility after getting Nuvoton's datasheet and their SDK.
>
>>>>>>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>>>>>>>
>>>>>>> - low-level PECI xfer command
>>>>>>
>>>>>> Would a separate 'dev' driver similar to i2c-dev make sense here?  Just thinking out loud.
>>>>>>
>>>>>
>>>>> Yes, drivers will be seperated into two but it's hard to say that this way is similar to i2c-dev.
>>>>> It would have a bit different shape.
>>>>>
>>>>
>>>> I'm not terribly familiar with the PECI protocol.  I'll see about getting a copy of the spec.
>>>> From what I can find via searches, it looks like individual nodes on the bus are addressed similar to I2C.
>>>> I'd expect that to be similar to how i2c-dev is structured: a
>>>> kobject per master and a kobject per address on the bus.  That way,
>>>> drivers can be bound to individual addresses. The misc driver would focus on exposing interacting with a specific address on the bus in a generic fashion.
>>>>
>>>
>>> As you said, it would be very useful if kernel has core bus framework
>>> like I2C, but current kernel doesn't have the core bus framework for
>>> PECI, and it would be a hugh project itself if we are going to implement one.
>>
>> Really?  IBM did so for FSI and it really helped with understanding the design.
>>
>
> Yes, IBM did really great work on FSI, Kudos to them.
>
>>> Generally, PECI bus topology is very simple unlike I2C. Usually in a
>>> single system, there is only one BMC controller and it has connections
>>> with CPUs, that's all. I don't see an advantage of using core bus framework on this simple interface.
>>>
>>
>> Ideally, an hwmon driver for PECI on an Intel CPU only needs to know how to issue PECI commands to that device.
>> What address it is at and how the bus delivers the command to the node are irrelevant details.
>> How do you plan to describe the PECI bus in a dts?
>> Can I use the same dt bindings for the Intel CPU's PECI interface for both Aspeed and Nuvoton?
>>
>
> HW dependent parameters will be added into dts. All SoCs has its own dt binding set so it couldn't
> be shared between Aspeed and Nuvoton.
>

Each PECI controller will have HW dependent parameters for sure.  I
was asking about PECI endpoints such as the CPUs themselves.  How can
I decouple the dts describing a Xeon 6152 on the PECI bus (and the
corresponding driver) from the controller details?

>>>>>>> - Ping()
>>>>>>> - GetDIB()
>>>>>>> - GetTemp()
>>>>>>> - RdPkgConfig()
>>>>>>> - WrPkgConfig()
>>>>>>> - RdIAMSR()
>>>>>>> - RdPCIConfigLocal()
>>>>>>> - WrPCIConfigLocal()
>>>>>>> - RdPCIConfig()
>>>>>>>
>>>>>>> Also, through the hwmon driver, these temperature monitoring features would be provided:
>>>>>>>
>>>>>>> - Core temperature
>>>>>>> - DTS thermal margin (hysteresis)
>>>>>>> - DDR DIMM temperature
>>>>>>> - etc.
>>>>>>
>>>>>> Sweet!
>>>>>>
>>>>>>>
>>>>>>> Patches will come in to upstream when it is ready.
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Jae
>>>>>>
>>>>>> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn't go anywhere:
>>>>>>
>>>>>> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
>>>>>>
>>>>>> thx - brad
>>>>>>
>>>>>
>>>>> My implementation is also heavily based on the Aspeed SDK driver
>>>>> but modified a lot to provide more suitable functionality for openbmc project. Hopefully, it could be introduced soon.
>>>>>
>>>>> thx,
>>>>> Jae
>>>
>


* Re: PECI API?
  2017-10-24 11:12   ` David Müller (ELSOFT AG)
@ 2017-10-25 20:11     ` Rick Altherr
  0 siblings, 0 replies; 19+ messages in thread
From: Rick Altherr @ 2017-10-25 20:11 UTC (permalink / raw)
  To: David Müller (ELSOFT AG); +Cc: Tanous, Ed, OpenBMC

I'll gladly review patches that propose a new kernel subsystem.  Based
on Ed's comments, I expect it would be very similar to the i2c
subsystem.

On Tue, Oct 24, 2017 at 4:12 AM, David Müller (ELSOFT AG)
<d.mueller@elsoft.ch> wrote:
> Hello
>
> Tanous, Ed wrote:
>
>> What are you looking at doing with it?
>
> PECI seems to play a major role in the communication between the BMC and
> x86 CPUs, as a lot of the system control/status infrastructure is only
> accessible by the BMC through PECI.
>
> Possible use cases may be:
>
> - Get CPU temperature.
> - Get CPU machine check exception info.
> - After a cold boot, delay CPU start until BMC is up and running.
>
>
> Dave


* Re: PECI API?
  2017-10-23 16:44 ` Tanous, Ed
  2017-10-23 19:02   ` Rick Altherr
@ 2017-10-24 11:12   ` David Müller (ELSOFT AG)
  2017-10-25 20:11     ` Rick Altherr
  1 sibling, 1 reply; 19+ messages in thread
From: David Müller (ELSOFT AG) @ 2017-10-24 11:12 UTC (permalink / raw)
  To: Tanous, Ed; +Cc: OpenBMC

Hello

Tanous, Ed wrote:

> What are you looking at doing with it?

PECI seems to play a major role in the communication between the BMC and
x86 CPUs, as a lot of the system control/status infrastructure is only
accessible by the BMC through PECI.

Possible use cases may be:

- Get CPU temperature.
- Get CPU machine check exception info.
- After a cold boot, delay CPU start until BMC is up and running.
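
As a concrete example of the first use case, and assuming the planned hwmon
driver exposes the standard tempN_input attributes, reading a CPU temperature
from userspace is just a sysfs read; the hwmon index below is illustrative:

  #include <stdio.h>

  int main(void)
  {
          long millideg;
          FILE *f = fopen("/sys/class/hwmon/hwmon0/temp1_input", "r");

          if (!f)
                  return 1;
          if (fscanf(f, "%ld", &millideg) != 1) {
                  fclose(f);
                  return 1;
          }
          fclose(f);
          /* hwmon reports temperatures in millidegrees Celsius */
          printf("CPU temperature: %ld.%03ld C\n",
                 millideg / 1000, millideg % 1000);
          return 0;
  }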


Dave


* Re: PECI API?
  2017-10-23 20:24         ` Tanous, Ed
@ 2017-10-23 20:40           ` Rick Altherr
  0 siblings, 0 replies; 19+ messages in thread
From: Rick Altherr @ 2017-10-23 20:40 UTC (permalink / raw)
  To: Tanous, Ed; +Cc: David Müller (ELSOFT AG), OpenBMC

On Mon, Oct 23, 2017 at 1:24 PM, Tanous, Ed <ed.tanous@intel.com> wrote:
>
>
>
>
> From: Rick Altherr [mailto:raltherr@google.com]
> Sent: Monday, October 23, 2017 12:48 PM
> To: Tanous, Ed <ed.tanous@intel.com>
> Cc: David Müller (ELSOFT AG) <d.mueller@elsoft.ch>; OpenBMC
> <openbmc@lists.ozlabs.org>
> Subject: Re: PECI API?
>
>
>
>
>
>
>
> On Mon, Oct 23, 2017 at 12:39 PM, Tanous, Ed <ed.tanous@intel.com> wrote:
>
> “Rather than a PECI API, would it make any sense to define a an abstract
> concept API, where one implementation of it has a PECI backend?”
>
>
>
> I don’t think so, but that’s partially why I asked about use cases.  PECI
> can be thought of a lot like SMBus, with some fancy protocol level features
> that make it easier to implement.  It’s a generic interface that can be used
> for any number of things, including temperature readings from processors and
> memory.  Our intention was to implement it as a device driver (/dev/peci),
> with (one or several) dbus daemons reading and pushing information to dbus
> using the existing openbmc interfaces (sensor, threshold, logging ect).
> There was also talk of implementing it as a hwmon driver, but I think that
> discussion was deferred, given that the number of “sensors” needs to be
> determined at runtime, and that didn’t seem to fit in the hwmon model.  This
> work is ongoing, so I don’t have a timeframe on completion or its level of
> robustness, but if there’s interest, I can probably push the WIP to a branch
> or a repo somewhere.
>
>
>
>
>
> hwmon has APIs for dynamically adding sensors at runtime.  For temps, volts,
> etc, I prefer an hwmon driver built atop a PECI subsystem.
>
>
>
> [Ed]  I was personally unaware that hwmon itself had that capability.  We
> had previously implemented a dynamically generated device tree overlay that
> accomplished some of that for LM75 sensors.  I will point my developers at
> it.
>
> “Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a
> PECI device driver for Aspeed upstream.”
>
> I don’t believe there is one defined upstream, but I believe the first
> revision of the driver code we are using was derived from the Aspeed SDK.
>
>
>
> What happens when Nuvoton sends their driver?
>
>
>
> [Ed] ….. we work to build a common interface that meets everyone’s needs,
> while abstracting the hardware interfaces into the kernel.  Part of the
> issue with PECI is there are some userspace constructs (retries, framing,
> timing, ect) that are a part of the PECI specification, but could be pushed
> to either hardware abstractions, or userspace code.  Where they get
> implemented (for platforms I’ve been a part of) has thusfar been a matter of
> preference on the part of the developer.  We likely should get together a
> group of interested parties and see if we can come up with an interface that
> works for everyone.  I have not yet worked with a Nuvoton platform, so I’m
> sure I have quite a bit to learn in that space.
>
>

I know Google is interested in seeing this abstraction get developed
soon.  Ideally this abstraction goes upstream as part of the same patch
series as an Aspeed driver.

>
> -Ed
>
>
>
>
>
> -----Original Message-----
> From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org]
> On Behalf Of Brad Bishop
> Sent: Monday, October 23, 2017 12:17 PM
> To: "David Müller (ELSOFT AG)" <d.mueller@elsoft.ch>
> Cc: OpenBMC <openbmc@lists.ozlabs.org>
> Subject: Re: PECI API?
>
>
>
>
>
>> On Oct 21, 2017, at 4:57 AM, David Müller (ELSOFT AG)
>> <d.mueller@elsoft.ch> wrote:
>
>>
>
>> Hello
>
>>
>
>> Is anyone working on an API definition for PECI?
>
>>
>
>>
>
>> Dave
>
>
>
> Full disclosure - the Wikipedia article is the extent of my background on
> PECI.
>
>
>
> Rather than a PECI API, would it make any sense to define a an abstract
> concept API, where one implementation of it has a PECI backend?
>
>
>
> My cursory glance at the Wikipedia article suggests PECI provides
> temperature readings (I’m sure it does much more) but the basic thought
> process I’ve outlined would allow control applications or metric gathering
> applications, etc to be re-used irrespective of where the data is coming
> from.
>
>
>
> -brad
>
>
>
>
>
> From: Rick Altherr [mailto:raltherr@google.com]
> Sent: Monday, October 23, 2017 12:02 PM
> To: Tanous, Ed <ed.tanous@intel.com>
> Cc: David Müller (ELSOFT AG) <d.mueller@elsoft.ch>; OpenBMC
> <openbmc@lists.ozlabs.org>
> Subject: Re: PECI API?
>
>
>
> Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a
> PECI device driver for Aspeed upstream.
>
>
>
> On Mon, Oct 23, 2017 at 9:44 AM, Tanous, Ed <ed.tanous@intel.com> wrote:
>
> We had looked at building one, and had one prototyped for a simple
> read/write interface, but we were on the fence about whether such a low
> level control (PECI read/write) should be put on dbus at all for security
> reasons, especially when the drivers read/write API isn't that difficult to
> use.
>
> What are you looking at doing with it?
>
> -Ed
>
>
> -----Original Message-----
> From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org]
> On Behalf Of David Müller (ELSOFT AG)
> Sent: Saturday, October 21, 2017 1:57 AM
> To: OpenBMC <openbmc@lists.ozlabs.org>
> Subject: PECI API?
>
> Hello
>
> Is anyone working on an API definition for PECI?
>
>
> Dave
>
>
>
>


* RE: PECI API?
  2017-10-23 19:47       ` Rick Altherr
@ 2017-10-23 20:24         ` Tanous, Ed
  2017-10-23 20:40           ` Rick Altherr
  0 siblings, 1 reply; 19+ messages in thread
From: Tanous, Ed @ 2017-10-23 20:24 UTC (permalink / raw)
  To: Rick Altherr; +Cc: David Müller (ELSOFT AG), OpenBMC




From: Rick Altherr [mailto:raltherr@google.com]
Sent: Monday, October 23, 2017 12:48 PM
To: Tanous, Ed <ed.tanous@intel.com>
Cc: David Müller (ELSOFT AG) <d.mueller@elsoft.ch>; OpenBMC <openbmc@lists.ozlabs.org>
Subject: Re: PECI API?



On Mon, Oct 23, 2017 at 12:39 PM, Tanous, Ed <ed.tanous@intel.com> wrote:

“Rather than a PECI API, would it make any sense to define an abstract concept API, where one implementation of it has a PECI backend?”

I don’t think so, but that’s partially why I asked about use cases.  PECI can be thought of a lot like SMBus, with some fancy protocol level features that make it easier to implement.  It’s a generic interface that can be used for any number of things, including temperature readings from processors and memory.  Our intention was to implement it as a device driver (/dev/peci), with (one or several) dbus daemons reading and pushing information to dbus using the existing openbmc interfaces (sensor, threshold, logging, etc.).  There was also talk of implementing it as a hwmon driver, but I think that discussion was deferred, given that the number of “sensors” needs to be determined at runtime, and that didn’t seem to fit in the hwmon model.  This work is ongoing, so I don’t have a timeframe on completion or its level of robustness, but if there’s interest, I can probably push the WIP to a branch or a repo somewhere.


hwmon has APIs for dynamically adding sensors at runtime.  For temps, volts, etc, I prefer an hwmon driver built atop a PECI subsystem.

[Ed]  I was personally unaware that hwmon itself had that capability.  We had previously implemented a dynamically generated device tree overlay that accomplished some of that for LM75 sensors.  I will point my developers at it.
“Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a PECI device driver for Aspeed upstream.”
I don’t believe there is one defined upstream, but I believe the first revision of the driver code we are using was derived from the Aspeed SDK.

What happens when Nuvoton sends their driver?

[Ed] ….. we work to build a common interface that meets everyone’s needs, while abstracting the hardware interfaces into the kernel.  Part of the issue with PECI is that there are some userspace constructs (retries, framing, timing, etc.) that are part of the PECI specification, but could be pushed to either hardware abstractions or userspace code.  Where they get implemented (for platforms I’ve been a part of) has thus far been a matter of preference on the part of the developer.  We likely should get together a group of interested parties and see if we can come up with an interface that works for everyone.  I have not yet worked with a Nuvoton platform, so I’m sure I have quite a bit to learn in that space.
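
If retries did end up in userspace, the wrapper could be as small as this;
PECI_IOC_XFER, the message struct and the retry count are all illustrative:

  #include <sys/ioctl.h>

  /* In reality these would come from the driver's uapi header; placeholder
   * declarations here just to make the sketch self-contained. */
  struct peci_xfer_msg;
  #define PECI_IOC_XFER 0xb700

  static int peci_xfer_retry(int fd, struct peci_xfer_msg *msg)
  {
          int i, ret = -1;

          /* retry a failed transfer a few times before giving up */
          for (i = 0; i < 3 && ret < 0; i++)
                  ret = ioctl(fd, PECI_IOC_XFER, msg);
          return ret;
  }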

-Ed



-----Original Message-----
From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org] On Behalf Of Brad Bishop
Sent: Monday, October 30, 2017 12:17 PM
To: "David Müller (ELSOFT AG)" <d.mueller@elsoft.ch>
Cc: OpenBMC <openbmc@lists.ozlabs.org>
Subject: Re: PECI API?





> On Oct 21, 2017, at 4:57 AM, David Müller (ELSOFT AG) <d.mueller@elsoft.ch> wrote:

>

> Hello

>

> Is anyone working on an API definition for PECI?

>

>

> Dave



Full disclosure - the Wikipedia article is the extent of my background on PECI.



Rather than a PECI API, would it make any sense to define an abstract concept API, where one implementation of it has a PECI backend?



My cursory glance at the Wikipedia article suggests PECI provides temperature readings (I’m sure it does much more) but the basic thought process I’ve outlined would allow control applications or metric gathering applications, etc to be re-used irrespective of where the data is coming from.



-brad


From: Rick Altherr [mailto:raltherr@google.com]
Sent: Monday, October 23, 2017 12:02 PM
To: Tanous, Ed <ed.tanous@intel.com>
Cc: David Müller (ELSOFT AG) <d.mueller@elsoft.ch>; OpenBMC <openbmc@lists.ozlabs.org>
Subject: Re: PECI API?

Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a PECI device driver for Aspeed upstream.

On Mon, Oct 23, 2017 at 9:44 AM, Tanous, Ed <ed.tanous@intel.com> wrote:
We had looked at building one, and had one prototyped for a simple read/write interface, but we were on the fence about whether such a low-level control (PECI read/write) should be put on dbus at all for security reasons, especially when the driver's read/write API isn't that difficult to use.

What are you looking at doing with it?

-Ed

-----Original Message-----
From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org] On Behalf Of David Müller (ELSOFT AG)
Sent: Saturday, October 21, 2017 1:57 AM
To: OpenBMC <openbmc@lists.ozlabs.org>
Subject: PECI API?

Hello

Is anyone working on an API definition for PECI?


Dave



[-- Attachment #2: Type: text/html, Size: 14781 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: PECI API?
  2017-10-23 19:39     ` Tanous, Ed
@ 2017-10-23 19:47       ` Rick Altherr
  2017-10-23 20:24         ` Tanous, Ed
  0 siblings, 1 reply; 19+ messages in thread
From: Rick Altherr @ 2017-10-23 19:47 UTC (permalink / raw)
  To: Tanous, Ed; +Cc: David Müller (ELSOFT AG), OpenBMC

[-- Attachment #1: Type: text/plain, Size: 4063 bytes --]

On Mon, Oct 23, 2017 at 12:39 PM, Tanous, Ed <ed.tanous@intel.com> wrote:

> “Rather than a PECI API, would it make any sense to define an abstract
> concept API, where one implementation of it has a PECI backend?”
>
>
>
> I don’t think so, but that’s partially why I asked about use cases.  PECI
> can be thought of a lot like SMBus, with some fancy protocol level features
> that make it easier to implement.  It’s a generic interface that can be
> used for any number of things, including temperature readings from
> processors and memory.  Our intention was to implement it as a device
> driver (/dev/peci), with (one or several) dbus daemons reading and pushing
> information to dbus using the existing openbmc interfaces (sensor,
> threshold, logging, etc.).  There was also talk of implementing it as a hwmon
> driver, but I think that discussion was deferred, given that the number of
> “sensors” needs to be determined at runtime, and that didn’t seem to fit in
> the hwmon model.  This work is ongoing, so I don’t have a timeframe on
> completion or its level of robustness, but if there’s interest, I can
> probably push the WIP to a branch or a repo somewhere.
>
>
>

hwmon has APIs for dynamically adding sensors at runtime.  For temps,
volts, etc., I prefer an hwmon driver built atop a PECI subsystem.


> “Is there a kernel subsystem for PECI defined upstream?  I'm not aware of
> a PECI device driver for Aspeed upstream.”
>
> I don’t believe there is one defined upstream, but I believe the first
> revision of the driver code we are using was derived from the Aspeed SDK.
>

What happens when Nuvoton sends their driver?


>
>
> -Ed
>
>
>
>
>
> -----Original Message-----
> From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org]
> On Behalf Of Brad Bishop
> Sent: Monday, October 23, 2017 12:17 PM
> To: "David Müller (ELSOFT AG)" <d.mueller@elsoft.ch>
> Cc: OpenBMC <openbmc@lists.ozlabs.org>
> Subject: Re: PECI API?
>
>
>
>
>
> > On Oct 21, 2017, at 4:57 AM, David Müller (ELSOFT AG) <
> d.mueller@elsoft.ch> wrote:
>
> >
>
> > Hello
>
> >
>
> > Is anyone working on an API definition for PECI?
>
> >
>
> >
>
> > Dave
>
>
>
> Full disclosure - the Wikipedia article is the extent of my background on
> PECI.
>
>
>
> Rather than a PECI API, would it make any sense to define an abstract
> concept API, where one implementation of it has a PECI backend?
>
>
>
> My cursory glance at the Wikipedia article suggests PECI provides
> temperature readings (I’m sure it does much more) but the basic thought
> process I’ve outlined would allow control applications or metric gathering
> applications, etc. to be re-used irrespective of where the data is coming
> from.
>
>
>
> -brad
>
>
>
>
>
> *From:* Rick Altherr [mailto:raltherr@google.com]
> *Sent:* Monday, October 23, 2017 12:02 PM
> *To:* Tanous, Ed <ed.tanous@intel.com>
> *Cc:* David Müller (ELSOFT AG) <d.mueller@elsoft.ch>; OpenBMC <
> openbmc@lists.ozlabs.org>
> *Subject:* Re: PECI API?
>
>
>
> Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a
> PECI device driver for Aspeed upstream.
>
>
>
> On Mon, Oct 23, 2017 at 9:44 AM, Tanous, Ed <ed.tanous@intel.com> wrote:
>
> We had looked at building one, and had one prototyped for a simple
> read/write interface, but we were on the fence about whether such a low
> level control (PECI read/write) should be put on dbus at all for security
> reasons, especially when the driver's read/write API isn't that difficult to
> use.
>
> What are you looking at doing with it?
>
> -Ed
>
>
> -----Original Message-----
> From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org]
> On Behalf Of David Müller (ELSOFT AG)
> Sent: Saturday, October 21, 2017 1:57 AM
> To: OpenBMC <openbmc@lists.ozlabs.org>
> Subject: PECI API?
>
> Hello
>
> Is anyone working on an API definition for PECI?
>
>
> Dave
>
>
>

[-- Attachment #2: Type: text/html, Size: 9736 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: PECI API?
  2017-10-23 19:02   ` Rick Altherr
@ 2017-10-23 19:39     ` Tanous, Ed
  2017-10-23 19:47       ` Rick Altherr
  0 siblings, 1 reply; 19+ messages in thread
From: Tanous, Ed @ 2017-10-23 19:39 UTC (permalink / raw)
  To: Rick Altherr; +Cc: David Müller (ELSOFT AG), OpenBMC

[-- Attachment #1: Type: text/plain, Size: 3663 bytes --]

“Rather than a PECI API, would it make any sense to define an abstract concept API, where one implementation of it has a PECI backend?”

I don’t think so, but that’s partially why I asked about use cases.  PECI can be thought of a lot like SMBus, with some fancy protocol level features that make it easier to implement.  It’s a generic interface that can be used for any number of things, including temperature readings from processors and memory.  Our intention was to implement it as a device driver (/dev/peci), with (one or several) dbus daemons reading and pushing information to dbus using the existing openbmc interfaces (sensor, threshold, logging, etc.).  There was also talk of implementing it as a hwmon driver, but I think that discussion was deferred, given that the number of “sensors” needs to be determined at runtime, and that didn’t seem to fit in the hwmon model.  This work is ongoing, so I don’t have a timeframe on completion or its level of robustness, but if there’s interest, I can probably push the WIP to a branch or a repo somewhere.
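
As a rough illustration of that split, a small daemon sitting on such a character device might do something like the sketch below before publishing the reading to dbus.  The PECI_IOC_XFER ioctl name, the peci_xfer_msg layout and the 0x30 client address are invented for the example (no ABI has been posted); the GetTemp() command code (0x01) and the signed 1/64 degC result are per the PECI spec.

/*
 * Rough illustration only: the ioctl, request layout and client address
 * below are hypothetical, not a published ABI.  The idea is simply to
 * open the char device, run one raw transfer, and decode the result.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct peci_xfer_msg {                  /* hypothetical request layout */
        uint8_t addr;                   /* PECI client address of the CPU */
        uint8_t tx_len;
        uint8_t rx_len;
        uint8_t tx_buf[32];
        uint8_t rx_buf[32];
};

#define PECI_IOC_XFER _IOWR('P', 0, struct peci_xfer_msg)   /* hypothetical */

int main(void)
{
        struct peci_xfer_msg msg;
        int fd = open("/dev/peci", O_RDWR);

        if (fd < 0) {
                perror("open /dev/peci");
                return 1;
        }

        memset(&msg, 0, sizeof(msg));
        msg.addr = 0x30;                /* first CPU socket */
        msg.tx_len = 1;
        msg.rx_len = 2;
        msg.tx_buf[0] = 0x01;           /* GetTemp() command code */

        if (ioctl(fd, PECI_IOC_XFER, &msg) < 0) {
                perror("PECI_IOC_XFER");
                close(fd);
                return 1;
        }

        /* two-byte signed result, LSB first, in 1/64 degC below Tjmax */
        int16_t raw = (int16_t)(msg.rx_buf[1] << 8 | msg.rx_buf[0]);
        printf("thermal margin: %.2f degC\n", raw / 64.0);

        close(fd);
        return 0;
}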

“Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a PECI device driver for Aspeed upstream.”
I don’t believe there is one defined upstream, but I believe the first revision of the driver code we are using was derived from the Aspeed SDK.

-Ed



-----Original Message-----
From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org] On Behalf Of Brad Bishop
Sent: Monday, October 23, 2017 12:17 PM
To: "David Müller (ELSOFT AG)" <d.mueller@elsoft.ch>
Cc: OpenBMC <openbmc@lists.ozlabs.org>
Subject: Re: PECI API?





> On Oct 21, 2017, at 4:57 AM, David Müller (ELSOFT AG) <d.mueller@elsoft.ch> wrote:
>
> Hello
>
> Is anyone working on an API definition for PECI?
>
>
> Dave

Full disclosure - the Wikipedia article is the extent of my background on PECI.

Rather than a PECI API, would it make any sense to define an abstract concept API, where one implementation of it has a PECI backend?

My cursory glance at the Wikipedia article suggests PECI provides temperature readings (I’m sure it does much more) but the basic thought process I’ve outlined would allow control applications or metric gathering applications, etc. to be re-used irrespective of where the data is coming from.

-brad


From: Rick Altherr [mailto:raltherr@google.com]
Sent: Monday, October 23, 2017 12:02 PM
To: Tanous, Ed <ed.tanous@intel.com>
Cc: David Müller (ELSOFT AG) <d.mueller@elsoft.ch>; OpenBMC <openbmc@lists.ozlabs.org>
Subject: Re: PECI API?

Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a PECI device driver for Aspeed upstream.

On Mon, Oct 23, 2017 at 9:44 AM, Tanous, Ed <ed.tanous@intel.com> wrote:
We had looked at building one, and had one prototyped for a simple read/write interface, but we were on the fence about whether such a low-level control (PECI read/write) should be put on dbus at all for security reasons, especially when the driver's read/write API isn't that difficult to use.

What are you looking at doing with it?

-Ed

-----Original Message-----
From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org] On Behalf Of David Müller (ELSOFT AG)
Sent: Saturday, October 21, 2017 1:57 AM
To: OpenBMC <openbmc@lists.ozlabs.org>
Subject: PECI API?

Hello

Is anyone working on an API definition for PECI?


Dave


[-- Attachment #2: Type: text/html, Size: 9192 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: PECI API?
  2017-10-21  8:57 David Müller (ELSOFT AG)
  2017-10-23 16:44 ` Tanous, Ed
@ 2017-10-23 19:17 ` Brad Bishop
  1 sibling, 0 replies; 19+ messages in thread
From: Brad Bishop @ 2017-10-23 19:17 UTC (permalink / raw)
  To: "David Müller (ELSOFT AG)"; +Cc: OpenBMC


> On Oct 21, 2017, at 4:57 AM, David Müller (ELSOFT AG) <d.mueller@elsoft.ch> wrote:
> 
> Hello
> 
> Is anyone working on an API definition for PECI?
> 
> 
> Dave

Full disclosure - the Wikipedia article is the extent of my
background on PECI.

Rather than a PECI API, would it make any sense to define
an abstract concept API, where one implementation of it has a
PECI backend?

My cursory glance at the Wikipedia article suggests PECI provides
temperature readings (I’m sure it does much more) but the basic
thought process I’ve outlined would allow control applications
or metric gathering applications, etc. to be re-used irrespective
of where the data is coming from.
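
A tiny sketch of that idea, with made-up names: the sampling code only
sees an abstract temperature source, and a PECI-backed provider would be
just one implementation behind it.

/*
 * Sketch of the "abstract concept API" idea: consumers see a generic
 * temperature source; a PECI-backed provider is one possible backend.
 * All names here are illustrative.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct temp_source {
        const char *name;
        /* fills millidegrees Celsius, returns 0 on success */
        int (*read_temp)(struct temp_source *src, int channel, int32_t *mdeg);
        void *priv;             /* backend state, e.g. a /dev/peci fd */
};

/* metric gathering / control code never learns where the data came from */
static void sample_all(struct temp_source **sources, int nsources)
{
        for (int i = 0; i < nsources; i++) {
                int32_t mdeg;

                if (sources[i]->read_temp(sources[i], 0, &mdeg) != 0)
                        continue;
                printf("%s: %d.%03d C\n", sources[i]->name,
                       mdeg / 1000, abs(mdeg % 1000));
        }
}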

-brad

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: PECI API?
  2017-10-23 16:44 ` Tanous, Ed
@ 2017-10-23 19:02   ` Rick Altherr
  2017-10-23 19:39     ` Tanous, Ed
  2017-10-24 11:12   ` David Müller (ELSOFT AG)
  1 sibling, 1 reply; 19+ messages in thread
From: Rick Altherr @ 2017-10-23 19:02 UTC (permalink / raw)
  To: Tanous, Ed; +Cc: David Müller (ELSOFT AG), OpenBMC

[-- Attachment #1: Type: text/plain, Size: 905 bytes --]

Is there a kernel subsystem for PECI defined upstream?  I'm not aware of a
PECI device driver for Aspeed upstream.

On Mon, Oct 23, 2017 at 9:44 AM, Tanous, Ed <ed.tanous@intel.com> wrote:

> We had looked at building one, and had one prototyped for a simple
> read/write interface, but we were on the fence about whether such a low
> level control (PECI read/write) should be put on dbus at all for security
> reasons, especially when the driver's read/write API isn't that difficult to
> use.
>
> What are you looking at doing with it?
>
> -Ed
>
> -----Original Message-----
> From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org]
> On Behalf Of David Müller (ELSOFT AG)
> Sent: Saturday, October 21, 2017 1:57 AM
> To: OpenBMC <openbmc@lists.ozlabs.org>
> Subject: PECI API?
>
> Hello
>
> Is anyone working on an API definition for PECI?
>
>
> Dave
>

[-- Attachment #2: Type: text/html, Size: 1496 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: PECI API?
  2017-10-21  8:57 David Müller (ELSOFT AG)
@ 2017-10-23 16:44 ` Tanous, Ed
  2017-10-23 19:02   ` Rick Altherr
  2017-10-24 11:12   ` David Müller (ELSOFT AG)
  2017-10-23 19:17 ` Brad Bishop
  1 sibling, 2 replies; 19+ messages in thread
From: Tanous, Ed @ 2017-10-23 16:44 UTC (permalink / raw)
  To: David Müller (ELSOFT AG), OpenBMC

We had looked at building one, and had one prototyped for a simple read/write interface, but we were on the fence about whether such a low-level control (PECI read/write) should be put on dbus at all for security reasons, especially when the driver's read/write API isn't that difficult to use.

What are you looking at doing with it?

-Ed

-----Original Message-----
From: openbmc [mailto:openbmc-bounces+ed.tanous=intel.com@lists.ozlabs.org] On Behalf Of David Müller (ELSOFT AG)
Sent: Saturday, October 21, 2017 1:57 AM
To: OpenBMC <openbmc@lists.ozlabs.org>
Subject: PECI API?

Hello

Is anyone working on an API definition for PECI?


Dave

^ permalink raw reply	[flat|nested] 19+ messages in thread

* PECI API?
@ 2017-10-21  8:57 David Müller (ELSOFT AG)
  2017-10-23 16:44 ` Tanous, Ed
  2017-10-23 19:17 ` Brad Bishop
  0 siblings, 2 replies; 19+ messages in thread
From: David Müller (ELSOFT AG) @ 2017-10-21  8:57 UTC (permalink / raw)
  To: OpenBMC

Hello

Is anyone working on an API definition for PECI?


Dave

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2017-11-01 17:27 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-23 17:03 PECI API? Jae Hyun Yoo
2017-10-30 17:45 ` Patrick Venture
2017-10-30 19:21 ` Brad Bishop
2017-10-31 16:26   ` Jae Hyun Yoo
2017-10-31 18:29     ` Rick Altherr
2017-10-31 21:50       ` Jae Hyun Yoo
2017-10-31 22:07         ` Rick Altherr
2017-11-01 16:45           ` Jae Hyun Yoo
2017-11-01 17:27             ` Rick Altherr
  -- strict thread matches above, loose matches on Subject: below --
2017-10-21  8:57 David Müller (ELSOFT AG)
2017-10-23 16:44 ` Tanous, Ed
2017-10-23 19:02   ` Rick Altherr
2017-10-23 19:39     ` Tanous, Ed
2017-10-23 19:47       ` Rick Altherr
2017-10-23 20:24         ` Tanous, Ed
2017-10-23 20:40           ` Rick Altherr
2017-10-24 11:12   ` David Müller (ELSOFT AG)
2017-10-25 20:11     ` Rick Altherr
2017-10-23 19:17 ` Brad Bishop
