From: Andy Shevchenko <andriy.shevchenko@intel.com>
To: Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
Cc: Rob Herring <robh+dt@kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Lee Jones <lee.jones@linaro.org>,
	Jean Delvare <jdelvare@suse.com>,
	Guenter Roeck <linux@roeck-us.net>,
	Mark Rutland <mark.rutland@arm.com>,
	Joel Stanley <joel@jms.id.au>, Andrew Jeffery <andrew@aj.id.au>,
	Jonathan Corbet <corbet@lwn.net>,
	Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
	Kishon Vijay Abraham I <kishon@ti.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	"Darrick J . Wong" <darrick.wong@oracle.com>,
	Eric Sandeen <sandeen@redhat.com>, Arnd Bergmann <arnd@arndb.de>,
	Wu Hao <hao.wu@intel.com>,
	Tomohiro Kusumi <kusumi.tomohiro@gmail.com>,
	"Bryant G . Ly" <bryantly@linux.vnet.ibm.com>,
	Frederic Barrat <fbarrat@linux.vnet.ibm.com>,
	"David S . Miller" <davem@davemloft.net>,
	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	Philippe Ombredanne <pombredanne@nexb.com>,
	Vinod Koul <vkoul@kernel.org>,
	Stephen Boyd <sboyd@codeaurora.org>,
	David Kershner <david.kershner@unisys.com>,
	Uwe Kleine-König <u.kleine-koenig@pengutronix.de>,
	Sagar Dharia <sdharia@codeaurora.org>,
	Johan Hovold <johan@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Juergen Gross <jgross@suse.com>,
	Cyrille Pitchen <cyrille.pitchen@wedev4u.fr>,
	Tomer Maimon <tmaimon77@gmail.com>,
	linux-hwmon@vger.kernel.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	openbmc@lists.ozlabs.org, Gavin Schenk <g.schenk@eckelmann.de>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Cyrille Pitchen <cyrille.pitchen@free-electrons.com>,
	Alan Cox <alan@linux.intel.com>, Andrew Lunn <andrew@lunn.ch>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Fengguang Wu <fengguang.wu@intel.com>,
	Jason M Bills <jason.m.bills@linux.intel.com>,
	Julia Cartwright <juliac@eso.teric.us>,
	Yunge Zhu <yunge.zhu@linux.intel.com>
Subject: Re: [PATCH v11 03/14] peci: Add support for PECI bus driver core
Date: Wed, 11 Dec 2019 22:18:17 +0200	[thread overview]
Message-ID: <20191211201817.GC32742@smile.fi.intel.com> (raw)
In-Reply-To: <20191211194624.2872-4-jae.hyun.yoo@linux.intel.com>

On Wed, Dec 11, 2019 at 11:46:13AM -0800, Jae Hyun Yoo wrote:
> This commit adds a driver implementation for the PECI bus core to the
> Linux driver framework.
> 
> PECI (Platform Environment Control Interface) is a one-wire bus interface
> that provides a communication channel from Intel processors and chipset
> components to external monitoring or control devices. PECI is designed to
> support the following sideband functions:
> 
> * Processor and DRAM thermal management
>   - Processor fan speed control is managed by comparing Digital Thermal
>     Sensor (DTS) thermal readings acquired via PECI against the
>     processor-specific fan speed control reference point, or TCONTROL. Both
>     TCONTROL and DTS thermal readings are accessible via the processor PECI
>     client. These variables are referenced to a common temperature, the TCC
>     activation point, and are both defined as negative offsets from that
>     reference.
>   - PECI-based access to the processor package configuration space provides
>     a means for Baseboard Management Controllers (BMC) or other platform
>     management devices to actively manage the processor and memory power
>     and thermal features.
> 
> * Platform Manageability
>   - Platform manageability functions including thermal, power, and error
>     monitoring. Note that platform 'power' management includes monitoring
>     and control for both the processor and DRAM subsystem to assist with
>     data center power limiting.
>   - PECI allows read access to certain error registers in the processor MSR
>     space and status monitoring registers in the PCI configuration space
>     within the processor and downstream devices.
>   - PECI permits writes to certain registers in the processor PCI
>     configuration space.
> 
> * Processor Interface Tuning and Diagnostics
>   - Processor interface tuning and diagnostics capabilities
>     (Intel Interconnect BIST). The processor's Intel Interconnect Built-In
>     Self Test (Intel IBIST) provides in-field diagnostic capabilities in
>     the Intel UPI and memory controller interfaces. PECI provides a port to
>     execute these diagnostics via its PCI Configuration read and write
>     capabilities.
> 
> * Failure Analysis
>   - Output the state of the processor after a failure for analysis via
>     Crashdump.
> 
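As an illustration of the comparison described in the thermal-management
bullet above (identifiers below are made up, not from this patch): both
readings are negative offsets from the TCC activation point, so a value
closer to zero means a hotter part.

  #include <linux/types.h>

  /* Illustrative only -- not part of this patch. */
  static bool fan_should_ramp(int dts_offset, int tcontrol_offset)
  {
  	/* Both are <= 0 relative to TCC activation; larger means hotter. */
  	return dts_offset > tcontrol_offset;
  }

  static int die_temp_celsius(int tcc_activation, int dts_offset)
  {
  	/* Absolute temperature, assuming the TCC activation point is known. */
  	return tcc_activation + dts_offset;
  }
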
> PECI uses a single wire for self-clocking and data transfer. The bus
> requires no additional control lines. The physical layer is a self-clocked
> one-wire bus that begins each bit with a driven, rising edge from an idle
> level near zero volts. The duration of the signal driven high depends on
> whether the bit value is a logic '0' or a logic '1'. PECI also supports a
> variable data transfer rate that is established with every message. In this
> way, it is highly flexible even though the underlying logic is simple.
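
A minimal sketch of the bit encoding described above (the half-bit-time
threshold is an assumption for illustration; the real thresholds are defined
by the PECI spec):

  /* Illustration only: classify a received bit from its measured high time. */
  static int peci_decode_bit(unsigned int high_ns, unsigned int bit_time_ns)
  {
  	/*
  	 * Assumed rule of thumb: a '1' holds the line high for more than
  	 * half of the (per-message negotiated) bit time, a '0' for less.
  	 */
  	return (2 * high_ns > bit_time_ns) ? 1 : 0;
  }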
> 
> The interface design was optimized for interfacing between an Intel
> processor and chipset components in both single processor and multiple
> processor environments. The single wire interface provides low board
> routing overhead for the multiple load connections in the congested routing
> area near the processor and chipset components. Bus speed, error checking,
> and low protocol overhead provide adequate link bandwidth and reliability
> to transfer critical device operating conditions and configuration
> information.
> 
> This implementation provides the basic framework to add PECI extensions to
> the Linux bus and device models. A hardware-specific 'Adapter' driver can
> be attached to the PECI bus to provide the sideband functions described
> above. It is also possible to access all devices on an adapter from
> userspace through the /dev interface. A device-specific 'Client' driver can
> also be attached to the PECI bus so that each processor client's features
> can be supported by the 'Client' driver through an adapter connection on
> the bus.
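
For readers new to the adapter/client split, here is a rough client-driver
skeleton. Every identifier below (struct peci_client, struct peci_driver,
module_peci_driver(), the probe signature) is a placeholder guess; the
authoritative definitions are whatever this patch exports in <linux/peci.h>.

  #include <linux/module.h>
  #include <linux/peci.h>

  static int my_client_probe(struct peci_client *client)
  {
  	/* Talk to the processor client behind the parent adapter. */
  	dev_info(&client->dev, "PECI client bound\n");
  	return 0;
  }

  static struct peci_driver my_client_driver = {
  	.probe  = my_client_probe,
  	.driver = { .name = "my-peci-client" },
  };
  module_peci_driver(my_client_driver);

  MODULE_LICENSE("GPL");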

Nice, we already have some drivers under drivers/hwmon. Are they using PECI?
How will they be integrated with this? Can this be part of drivers/hwmon?

> Changes since v10:

It's funny that I don't remember the previous version(s), but anyway I'll
comment on this later on -- it has at least several style issues and
inconveniences.

> - Split out peci-dev module from peci-core module.
> - Added PECI 4.0 command set support.
> - Refined 32-bit boundary alignment for all PECI ioctl command structs.
> - Added DMA safe command buffer handling in peci-core.
> - Refined kconfig dependencies in PECI subsystem.
> - Fixed minor bugs and style issues.
> - configfs support isn't added in this patch set. Will add that using a
>   separate patch set.

> +config PECI
> +	tristate "PECI support"
> +	select CRC8

> +	default n

For a start, this one is redundant ('n' is already the default).
If you have more of these, drop them too.
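That is, with the redundant default dropped, the entry would simply read:

  config PECI
  	tristate "PECI support"
  	select CRC8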

> +#include <linux/bitfield.h>
> +#include <linux/crc8.h>
> +#include <linux/delay.h>
> +#include <linux/mm.h>
> +#include <linux/module.h>

> +#include <linux/of_device.h>

What about ACPI? Can you use the fwnode API so the driver works with both?
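For instance, the generic property API does not care whether the device was
described via DT or ACPI (the property name below is made up, purely for
illustration):

  #include <linux/device.h>
  #include <linux/property.h>

  /* Sketch: read an optional per-device tunable from DT *or* ACPI. */
  static u32 get_cmd_timeout_ms(struct device *dev)
  {
  	u32 timeout_ms = 1000;	/* assumed default when the property is absent */

  	device_property_read_u32(dev, "cmd-timeout-ms", &timeout_ms);
  	return timeout_ms;
  }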

> +#include <linux/peci.h>
> +#include <linux/pm_domain.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/sched/task_stack.h>
> +#include <linux/slab.h>
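
Side note: I guess <linux/sched/task_stack.h> and <linux/mm.h> are here for
the 'DMA safe command buffer handling' item in the changelog. A common
pattern for that (sketch only, not necessarily what this patch does) is to
bounce stack-resident buffers to the heap before an adapter maps them for
DMA:

  #include <linux/mm.h>			/* virt_addr_valid() */
  #include <linux/sched/task_stack.h>	/* object_is_on_stack() */
  #include <linux/slab.h>

  /*
   * Sketch: return a buffer that is safe to hand to a DMA-capable adapter.
   * Callers must kfree() the result when it differs from 'buf'.
   */
  static void *peci_get_dma_safe_buf(void *buf, size_t len)
  {
  	if (virt_addr_valid(buf) && !object_is_on_stack(buf))
  		return buf;			/* already DMA-capable memory */

  	return kmemdup(buf, len, GFP_KERNEL);	/* bounce copy */
  }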

-- 
With Best Regards,
Andy Shevchenko


Thread overview: 37+ messages
2019-12-11 19:46 [PATCH v11 00/14] PECI device driver introduction Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 01/14] dt-bindings: Add PECI subsystem document Jae Hyun Yoo
2019-12-18  2:52   ` Rob Herring
2019-12-18 23:12     ` Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 02/14] Documentation: ioctl: Add ioctl numbers for PECI subsystem Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 03/14] peci: Add support for PECI bus driver core Jae Hyun Yoo
2019-12-11 20:18   ` Andy Shevchenko [this message]
2019-12-12  0:46     ` Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 04/14] dt-bindings: Add bindings document of Aspeed PECI adapter Jae Hyun Yoo
2019-12-18  2:57   ` Rob Herring
2019-12-18 23:21     ` Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 05/14] ARM: dts: aspeed: Add PECI node Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 06/14] peci: Add Aspeed PECI adapter driver Jae Hyun Yoo
2019-12-11 20:28   ` Andy Shevchenko
2019-12-12  0:50     ` Jae Hyun Yoo
2019-12-12  8:47       ` Andy Shevchenko
2019-12-12 18:51         ` Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 07/14] dt-bindings: peci: add NPCM PECI documentation Jae Hyun Yoo
2019-12-18 14:42   ` Rob Herring
2019-12-18 23:30     ` Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 08/14] ARM: dts: npcm7xx: Add PECI node Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 09/14] peci: npcm: add NPCM PECI driver Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 10/14] dt-bindings: mfd: Add Intel PECI client bindings document Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 11/14] mfd: intel-peci-client: Add Intel PECI client driver Jae Hyun Yoo
2019-12-16 16:01   ` Lee Jones
2019-12-16 21:57     ` Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 12/14] Documentation: hwmon: Add documents for PECI hwmon drivers Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 13/14] hwmon: Add PECI cputemp driver Jae Hyun Yoo
2019-12-13  6:24   ` Guenter Roeck
2019-12-16 20:43     ` Jae Hyun Yoo
2019-12-11 19:46 ` [PATCH v11 14/14] hwmon: Add PECI dimmtemp driver Jae Hyun Yoo
2019-12-13  6:32   ` Guenter Roeck
2019-12-16 21:04     ` Jae Hyun Yoo
2019-12-16 21:21       ` Guenter Roeck
2019-12-16 22:17         ` Jae Hyun Yoo
2019-12-16 23:27           ` Guenter Roeck
2019-12-16 23:31             ` Jae Hyun Yoo
