From: jae.hyun.yoo@linux.intel.com (Jae Hyun Yoo)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 0/8] PECI device driver introduction
Date: Tue, 6 Mar 2018 11:21:57 -0800
Message-ID: <94a31519-8009-b131-4d9e-95631b789931@linux.intel.com>
In-Reply-To: <20180306124014.GB13950@amd>

Hi Pavel,

Please see my answer inline.

On 3/6/2018 4:40 AM, Pavel Machek wrote:
> Hi!
> 
>> Introduction of the Platform Environment Control Interface (PECI) bus
>> device driver. PECI is a one-wire bus interface that provides a
>> communication channel from Intel processor and chipset components to
>> external monitoring or control devices. PECI is designed to support the
>> following sideband functions:
>>
>> * Processor and DRAM thermal management
>>    - Processor fan speed control is managed by comparing Digital Thermal
>>      Sensor (DTS) thermal readings acquired via PECI against the
>>      processor-specific fan speed control reference point, or TCONTROL.
>>      Both TCONTROL and DTS thermal readings are accessible via the processor
>>      PECI client. These variables are referenced to a common temperature,
>>      the TCC activation point, and are both defined as negative offsets from
>>      that reference (a short worked example follows this list).
>>    - PECI based access to the processor package configuration space provides
>>      a means for Baseboard Management Controllers (BMC) or other platform
>>      management devices to actively manage the processor and memory power
>>      and thermal features.
>>
>> * Platform Manageability
>>    - Platform manageability functions including thermal, power, and error
>>      monitoring. Note that platform 'power' management includes monitoring
>>      and control for both the processor and DRAM subsystem to assist with
>>      data center power limiting.
>>    - PECI allows read access to certain error registers in the processor MSR
>>      space and status monitoring registers in the PCI configuration space
>>      within the processor and downstream devices.
>>    - PECI permits writes to certain registers in the processor PCI
>>      configuration space.
>>
>> * Processor Interface Tuning and Diagnostics
>>    - Processor interface tuning and diagnostics capabilities
>>      (Intel(c) Interconnect BIST). The processor's Intel(c) Interconnect
>>      Built-In Self Test (Intel(c) IBIST) allows for in-field diagnostic
>>      capabilities in the Intel UPI and memory controller interfaces. PECI
>>      provides a port to execute these diagnostics via its PCI Configuration
>>      read and write capabilities.
>>
>> * Failure Analysis
>>    - Output the state of the processor after a failure for analysis via
>>      Crashdump.
>>
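
As a short worked example of the thermal relationship in the first item
above: both the DTS reading and Tcontrol are negative offsets from the TCC
activation temperature, so the absolute temperature and the fan decision
fall out directly. The numbers below are illustrative assumptions; real
values come from the processor over PECI (GetTemp() and package
configuration reads).

/*
 * Worked example only: Tjmax, the DTS margin and Tcontrol below are
 * made-up numbers; real values come from the processor over PECI.
 */
#include <stdio.h>

int main(void)
{
        int tjmax = 90;         /* TCC activation temperature, degrees C */
        int dts_margin = -35;   /* DTS reading: negative offset from Tjmax */
        int tcontrol = -10;     /* fan control reference: also a negative offset */

        int die_temp = tjmax + dts_margin;      /* absolute temperature: 55 C */

        /* Fan speed only needs to ramp once the margin crosses Tcontrol. */
        if (dts_margin >= tcontrol)
                printf("%d C: at or above Tcontrol, raise fan speed\n", die_temp);
        else
                printf("%d C: below Tcontrol, keep base fan speed\n", die_temp);

        return 0;
}
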
>> PECI uses a single wire for self-clocking and data transfer. The bus
>> requires no additional control lines. The physical layer is a self-clocked
>> one-wire bus that begins each bit with a driven, rising edge from an idle
>> level near zero volts. The duration of the signal driven high depends on
>> whether the bit value is a logic '0' or a logic '1'. PECI also uses a
>> variable data transfer rate that is established with every message. In this
>> way, it is highly flexible even though the underlying logic is simple.
>>
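
As a rough illustration of that encoding, a decoder only has to compare how
long the line was driven high against the total bit time. The 50% threshold
below is a simplifying assumption for illustration; the exact timing windows
come from the PECI specification.

/*
 * Illustrative decoder for the bit encoding described above: each bit
 * cell starts with a rising edge, and the time the line stays high
 * distinguishes '0' from '1'.  The 50% threshold is a simplification,
 * not the exact figure from the PECI specification.
 */
#include <stdbool.h>
#include <stdio.h>

static bool peci_decode_bit(unsigned int high_ns, unsigned int bit_time_ns)
{
        /* A longer driven-high phase within the bit cell means logic '1'. */
        return 2 * high_ns > bit_time_ns;
}

int main(void)
{
        /* Example: a 1000 ns bit cell. */
        printf("%d\n", peci_decode_bit(250, 1000));     /* short high -> 0 */
        printf("%d\n", peci_decode_bit(750, 1000));     /* long high  -> 1 */
        return 0;
}
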
>> The interface design was optimized for interfacing to Intel processor and
>> chipset components in both single processor and multiple processor
>> environments. The single wire interface provides low board routing
>> overhead for the multiple load connections in the congested routing area
>> near the processor and chipset components. Bus speed, error checking, and
>> low protocol overhead provide adequate link bandwidth and reliability to
>> transfer critical device operating conditions and configuration
>> information.
>>
>> This implementation provides the basic framework to add PECI extensions
>> to the Linux bus and device models. A hardware-specific 'Adapter' driver
>> can be attached to the PECI bus to provide the sideband functions described
>> above. It is also possible to access all devices on an adapter from
>> userspace through the /dev interface. A device-specific 'Client' driver can
>> also be attached to the PECI bus so that each processor client's features
>> can be supported by the 'Client' driver through an adapter connection on
>> the bus. This patch set includes an Aspeed AST24xx/25xx PECI adapter driver
>> and a generic PECI hwmon client driver as the first implementations of both
>> adapter and client drivers on the PECI bus framework.
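
A rough sketch of that adapter/client split, for illustration: the my_peci_*
names below are hypothetical placeholders rather than the API this patch set
actually exports; the point is only how a hardware-specific adapter and a
feature-specific client relate on the bus.

/*
 * Hypothetical sketch of the adapter/client split; the my_peci_* names
 * are placeholders, not the API exported by this patch set.
 */
#include <linux/device.h>
#include <linux/types.h>

/* One PECI wire master, e.g. the Aspeed controller ("Adapter" driver). */
struct my_peci_adapter {
        struct device dev;
        int (*xfer)(struct my_peci_adapter *adapter,
                    const u8 *tx, size_t tx_len, u8 *rx, size_t rx_len);
};

/* One processor client on that wire ("Client" driver, e.g. a hwmon driver). */
struct my_peci_client {
        struct my_peci_adapter *adapter;
        u8 addr;                /* PECI client address, e.g. 0x30 for CPU 0 */
};

/* A client driver only ever talks through its adapter's transfer hook. */
static int my_peci_get_dib(struct my_peci_client *client, u8 dib[8])
{
        const u8 tx[] = { 0xf7 };       /* GetDIB() command code */

        return client->adapter->xfer(client->adapter, tx, sizeof(tx), dib, 8);
}
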
> 
> Ok, how does this interact with ACPI/SMM BIOS/Secure mode code? Does
> Linux _need_ to control the fan? Or is SMM BIOS capable of doing all
> the work itself and Linux has just read-only access for monitoring
> purposes?
> 

This driver is not for the local CPUs it runs on. Instead, it runs in
the BMC (Baseboard Management Controller) kernel, which is separate
from the host server. In this implementation, it provides read-only
access for remotely monitoring the server's CPU and DIMM temperatures
over a PECI connection. The BMC can control fans based on that
monitoring data if it has a fan control interface, but that depends on
the baseboard's hardware and software design.
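
As a concrete example of what that read-only monitoring looks like from BMC
userspace: a PECI hwmon client exposes its readings through the standard
hwmon sysfs ABI, in millidegrees Celsius. This is only a minimal sketch; the
hwmon index and attribute name below are assumptions, and the real path
depends on the board and on how the client driver registers.

/*
 * Minimal userspace sketch: read one temperature exposed by a PECI
 * hwmon client.  The sysfs path is an assumption for illustration only.
 */
#include <stdio.h>

int main(void)
{
        long millideg;
        FILE *f = fopen("/sys/class/hwmon/hwmon0/temp1_input", "r");

        if (!f) {
                perror("open hwmon attribute");
                return 1;
        }
        if (fscanf(f, "%ld", &millideg) == 1)
                printf("CPU die temperature: %.3f C\n", millideg / 1000.0);
        fclose(f);
        return 0;
}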

Thanks,
Jae

> Pavel
> 
> -- (english) http://www.livejournal.com/~pavelmachek
> (cesky, pictures)
> http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
> 


Thread overview: 46+ messages
2018-02-21 16:15 [PATCH v2 0/8] PECI device driver introduction Jae Hyun Yoo
2018-02-21 16:15 ` [PATCH v2 1/8] [PATCH 1/8] drivers/peci: Add support for PECI bus driver core Jae Hyun Yoo
2018-02-21 17:04   ` Andrew Lunn
2018-02-21 20:31     ` Jae Hyun Yoo
2018-02-21 21:51       ` Andrew Lunn
2018-02-21 22:03         ` Jae Hyun Yoo
2018-02-21 17:58   ` Greg KH
2018-02-21 20:42     ` Jae Hyun Yoo
2018-02-22  6:54       ` Greg KH
2018-02-22 17:20         ` Jae Hyun Yoo
2018-02-22  7:01   ` kbuild test robot
2018-02-22  7:01   ` [RFC PATCH] drivers/peci: peci_match_id() can be static kbuild test robot
2018-02-22 17:25     ` Jae Hyun Yoo
2018-03-07  3:19   ` [PATCH v2 1/8] [PATCH 1/8] drivers/peci: Add support for PECI bus driver core Julia Cartwright
2018-03-07 19:03     ` Jae Hyun Yoo
2018-02-21 16:16 ` [PATCH v2 2/8] [PATCH 2/8] Documentations: dt-bindings: Add a document of PECI adapter driver for Aspeed AST24xx/25xx SoCs Jae Hyun Yoo
2018-02-21 17:13   ` Andrew Lunn
2018-02-21 20:35     ` Jae Hyun Yoo
2018-03-06 12:40   ` Pavel Machek
2018-03-06 12:54     ` Andrew Lunn
2018-03-06 13:05       ` Pavel Machek
2018-03-06 13:19         ` Arnd Bergmann
2018-03-06 19:05     ` Jae Hyun Yoo
2018-03-07 22:11       ` Pavel Machek
2018-03-09 23:41       ` Milton Miller II
2018-03-09 23:47         ` Jae Hyun Yoo
2018-02-21 16:16 ` [PATCH v2 3/8] [PATCH 3/8] ARM: dts: aspeed: peci: Add PECI node Jae Hyun Yoo
2018-02-21 16:16 ` [PATCH v2 4/8] [PATCH 4/8] drivers/peci: Add a PECI adapter driver for Aspeed AST24xx/AST25xx Jae Hyun Yoo
2018-02-21 16:16 ` [PATCH v2 5/8] [PATCH [5/8] Documentation: dt-bindings: Add a document for PECI hwmon client driver Jae Hyun Yoo
2018-02-21 16:16 ` [PATCH v2 6/8] [PATCH 6/8] Documentation: hwmon: " Jae Hyun Yoo
2018-03-06 20:28   ` Randy Dunlap
2018-03-06 21:08     ` Jae Hyun Yoo
2018-02-21 16:16 ` [PATCH v2 7/8] [PATCH 7/8] drivers/hwmon: Add a generic " Jae Hyun Yoo
2018-02-21 18:26   ` Guenter Roeck
2018-02-21 21:24     ` Jae Hyun Yoo
2018-02-21 21:48       ` Guenter Roeck
2018-02-21 23:07         ` Jae Hyun Yoo
2018-02-22  0:37           ` Andrew Lunn
2018-02-22  1:29             ` Jae Hyun Yoo
2018-02-24  0:00               ` Miguel Ojeda
2018-02-24  9:32                 ` Jae Hyun Yoo
2018-03-13  9:32   ` Stef van Os
2018-03-13 18:56     ` Jae Hyun Yoo
2018-02-21 16:16 ` [PATCH v2 8/8] [PATCH 8/8] Add a maintainer for the PECI subsystem Jae Hyun Yoo
2018-03-06 12:40 ` [PATCH v2 0/8] PECI device driver introduction Pavel Machek
2018-03-06 19:21   ` Jae Hyun Yoo [this message]
