openbmc.lists.ozlabs.org archive mirror
From: Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
To: Lee Jones <lee.jones@linaro.org>
Cc: openbmc@lists.ozlabs.org, Andrew Jeffery <andrew@aj.id.au>,
	James Feist <james.feist@linux.intel.com>,
	Jason M Bills <jason.m.bills@linux.intel.com>,
	Joel Stanley <joel@jms.id.au>,
	Vernon Mauery <vernon.mauery@linux.intel.com>
Subject: Re: [PATCH linux-next v5 08/13] drivers/mfd: Add PECI client mfd driver
Date: Thu, 5 Jul 2018 09:33:29 -0700	[thread overview]
Message-ID: <7fecc267-eef2-e4ad-bb74-14c3e8e5877b@linux.intel.com> (raw)
In-Reply-To: <20180704064130.GB20176@dell>

Hi Lee,

Thanks for your review. Please see my inline answers.

On 7/3/2018 11:41 PM, Lee Jones wrote:
> On Mon, 18 Jun 2018, Jae Hyun Yoo wrote:
> 
>> On 6/17/2018 10:59 PM, Lee Jones wrote:
>>> On Thu, 14 Jun 2018, Jae Hyun Yoo wrote:
>>>
>>>> Thanks for the review. Please see my inline answers.
>>>>
>>>> On 6/12/2018 11:30 PM, Lee Jones wrote:
>>>>> On Fri, 01 Jun 2018, Jae Hyun Yoo wrote:
>>>>>
>>>>>> This commit adds PECI client mfd driver.
>>>>>>
>>>>>> Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
>>>>>> Cc: Andrew Jeffery <andrew@aj.id.au>
>>>>>> Cc: James Feist <james.feist@linux.intel.com>
>>>>>> Cc: Jason M Bills <jason.m.bills@linux.intel.com>
>>>>>> Cc: Joel Stanley <joel@jms.id.au>
>>>>>> Cc: Vernon Mauery <vernon.mauery@linux.intel.com>
>>>>>> ---
>>>>>>     drivers/mfd/Kconfig             |  11 ++
>>>>>>     drivers/mfd/Makefile            |   1 +
>>>>>>     drivers/mfd/peci-client.c       | 205 ++++++++++++++++++++++++++++++++
>>>>>>     include/linux/mfd/peci-client.h |  60 ++++++++++
>>>>>>     4 files changed, 277 insertions(+)
>>>>>>     create mode 100644 drivers/mfd/peci-client.c
>>>>>>     create mode 100644 include/linux/mfd/peci-client.h
> 
> [...]
> 
>>>>>> +/*
>>>>>> + * Architectures other than x86 cannot include the header file, so define
>>>>>> + * them here. They are needed to detect the type of client x86 CPUs
>>>>>> + * behind a PECI connection.
>>>>>> + */
>>>>>> +#define INTEL_FAM6_HASWELL_X   0x3F
>>>>>> +#define INTEL_FAM6_BROADWELL_X 0x4F
>>>>>> +#define INTEL_FAM6_SKYLAKE_X   0x55
>>>>>> +#endif
>>>>>> +
>>>>>> +#define LOWER_NIBBLE_MASK      GENMASK(3, 0)
>>>>>> +#define UPPER_NIBBLE_MASK      GENMASK(7, 4)
>>>>>> +
>>>>>> +#define CPU_ID_MODEL_MASK      GENMASK(7, 4)
>>>>>> +#define CPU_ID_FAMILY_MASK     GENMASK(11, 8)
>>>>>> +#define CPU_ID_EXT_MODEL_MASK  GENMASK(19, 16)
>>>>>> +#define CPU_ID_EXT_FAMILY_MASK GENMASK(27, 20)
>>>>>
>>>>> In the case of X86 based devices, are these being redefined?
>>>>>
>>>>
>>>> They are not defined even in an x86 arch build; these masks are just
>>>> for PECI use cases. Also, the CPUs in this implementation are not
>>>> local CPUs but remote CPUs connected through the PECI interface,
>>>> which means this code can run on a non-x86 kernel as well.
>>>
>>> This is starting to sound like a 'remoteproc' driver, no?
>>
>> This is a driver for remote Intel CPUs that sit behind a PECI connection.
> 
> Sounds like 'remoteproc' to me.
> 

No. It's not a remoteproc. "Remote Intel CPUs" here means the host
server CPUs, which run a completely separate OS. This implementation is
for the BMC (Baseboard Management Controller) kernel to monitor the
host server machine's Intel CPUs; it is not for multiprocessing.

>>>>>> +enum cpu_gens {
>>>>>> +	CPU_GEN_HSX = 0, /* Haswell Xeon */
>>>>>> +	CPU_GEN_BRX,     /* Broadwell Xeon */
>>>>>> +	CPU_GEN_SKX,     /* Skylake Xeon */
>>>>>> +};
>>>>>> +
>>>>>> +static struct mfd_cell peci_functions[] = {
>>>>>> +	{
>>>>>> +		.name = "peci-cputemp",
>>>>>> +		.of_compatible = "intel,peci-cputemp",
>>>>>> +	},
>>>>>> +	{
>>>>>> +		.name = "peci-dimmtemp",
>>>>>> +		.of_compatible = "intel,peci-dimmtemp",
>>>>>> +	},
>>>>>> +};
>>>>>
>>>>> A couple of temp sensors?  This isn't looking very MFD-like.
>>>>>
>>>>> What makes this an MFD?
>>>>>
>>>>
>>>> Currently it has only a couple of temperature sensors, but that is
>>>> just one PECI function. Other PECI functions will be added later,
>>>> which is why it should be an MFD. Please see the following PECI
>>>> sideband functions that I introduced in the cover letter of this
>>>> patch set.
>>>>
>>>> * Processor and DRAM thermal management
>>>> * Platform Manageability
>>>> * Processor Interface Tuning and Diagnostics
>>>> * Failure Analysis
>>>
>>> Are these all devices in their own right?
>>>
>>> Will they each have drivers associated with them?
>>>
>>> Is there a datasheet for this PECI device?
>>>
>>
>> The temperature monitoring driver I'm trying to upstream should be
>> added to the hwmon subsystem, but the other function drivers will be
>> added as platform drivers or as something else. That is why I made
>> this an MFD driver: the function drivers are separate, but they
>> actually use the same device, the remote CPU. The datasheet is
>> available through http://www.intel.com/design/literature.htm via an
>> NDA process. Intel provides the datasheet to HW vendors.
> 
> I logged in and searched for "PECI".  These were my results:
> 
>    https://i.imgur.com/y86S96I.png
> 

You may need a CNDA:

https://www.intel.com/content/www/us/en/forms/design/registration-privileged.html

Thanks,

Jae

Thread overview: 9+ messages
2018-06-01 18:22 [PATCH linux-next v5 08/13] drivers/mfd: Add PECI client mfd driver Jae Hyun Yoo
2018-06-13  6:30 ` Lee Jones
2018-06-14 17:48   ` Jae Hyun Yoo
2018-06-18  5:59     ` Lee Jones
2018-06-18 17:09       ` Jae Hyun Yoo
2018-07-04  6:41         ` Lee Jones
2018-07-05 16:33           ` Jae Hyun Yoo [this message]
2018-07-04  6:42         ` Lee Jones
2018-07-05 16:39           ` Jae Hyun Yoo
