* BMC health metrics (again!)
@ 2019-04-09 16:25 Kun Yi
  2019-04-11 12:56 ` Sivas Srr
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Kun Yi @ 2019-04-09 16:25 UTC (permalink / raw)
  To: OpenBMC Maillist


Hello there,

This topic has been brought up several times on the mailing list and
offline, but it seems we as a community haven't reached a consensus on
which things would be the most valuable to monitor, and how to monitor
them. While a general-purpose monitoring infrastructure for OpenBMC seems
to be a hard problem, I have some simple ideas that I hope can provide
immediate and direct benefits.

1. Monitoring host IPMI link reliability (host side)

The essentials I want are "IPMI commands sent" and "IPMI commands
succeeded" counts over time. More metrics, like response time, would
be helpful as well. The issue to address here: when some IPMI sensor
readings are flaky, it would be really helpful to be able to tell from the
IPMI command stats whether it is a hardware issue or an IPMI issue.
Moreover, this would be a very useful regression-test metric when rolling
out new BMC software.
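To make the idea concrete, here's a rough sketch (names are made up, not a
proposed implementation) of the counter bookkeeping I have in mind, which
would be sampled periodically:

```python
from collections import Counter

class IpmiLinkStats:
    """Hypothetical IPMI link-health tracker: every command bumps
    "sent", and only commands that complete with a success response
    bump "succeeded", so a success ratio can be sampled over time."""

    def __init__(self):
        self.counts = Counter()

    def record(self, succeeded):
        self.counts["sent"] += 1
        if succeeded:
            self.counts["succeeded"] += 1

    def success_ratio(self):
        # Report 1.0 when nothing has been sent yet, to avoid a
        # divide-by-zero on an idle link.
        sent = self.counts["sent"]
        return self.counts["succeeded"] / sent if sent else 1.0
```

Exporting just these two counters (plus, ideally, a response-time
histogram) would already go a long way toward separating "sensor is flaky"
from "IPMI transport is flaky".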

Looking at the host IPMI side, there are some metrics exposed
through /proc/ipmi/0/si_stats if the ipmi_si driver is used, but I haven't
dug into whether they contain information mapping to the interrupts. Time
to read the source code, I guess.
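As far as I recall, si_stats is mostly "name: value" counter lines, so even
before mapping the fields to interrupts, a generic parser would give us
something to export. A sketch under that format assumption (the exact field
set varies by kernel version):

```python
def parse_si_stats(text):
    """Collect integer counters from ipmi_si stats text of the rough
    form "name: value"; lines without an integer value are skipped."""
    stats = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        name, _, value = line.rpartition(":")
        try:
            stats[name.strip()] = int(value.strip())
        except ValueError:
            continue  # non-integer field, ignore for now
    return stats
```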

Another idea would be to instrument caller libraries like the interfaces in
ipmitool, though I feel that approach is harder due to the fragmentation of
IPMI libraries.

2. Read and expose core BMC performance metrics from procfs

This is straightforward: have a smallish daemon (or bmc-state-manager)
read, parse, and process procfs and put the values on D-Bus. Core metrics
I'm interested in gathering this way: load average, memory, disk
used/available, net stats... The values can then simply be exported as IPMI
sensors or Redfish resource properties.
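As a sketch of how little parsing this actually takes, here are two toy
parsers for /proc/loadavg and /proc/meminfo (text is passed in so the
daemon, not the parser, owns the file I/O):

```python
def parse_loadavg(text):
    """Extract the 1-, 5-, and 15-minute load averages, i.e. the
    first three fields of /proc/loadavg."""
    one, five, fifteen = text.split()[:3]
    return {"load1": float(one), "load5": float(five),
            "load15": float(fifteen)}

def parse_meminfo(text):
    """Turn /proc/meminfo "Key:  value kB" lines into a dict of
    integer values (in kB for most fields)."""
    mem = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            mem[key.strip()] = int(fields[0])
    return mem
```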

A nice byproduct of this effort would be a procfs parsing library. Since
different platforms will probably have different monitoring requirements,
and procfs output has no standard format, I'm thinking the user would just
provide a configuration file containing a list of (procfs path, property
regex, D-Bus property name) tuples, and compile-time generated code would
provide an object for each property.
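Roughly, each configuration entry would drive a lookup like the following
(a runtime sketch rather than the compile-time generated code I'm
proposing; the paths, regexes, and property names are just examples):

```python
import re

# Example (procfs path, value regex, D-Bus property name) entries.
EXAMPLE_CONFIG = [
    ("/proc/loadavg", r"^(\S+)", "LoadAverage1Min"),
    ("/proc/meminfo", r"MemFree:\s+(\d+)", "MemFreeKiB"),
]

def collect(read_file, config=EXAMPLE_CONFIG):
    """Apply each (path, regex, property) rule; the file-reading
    function is injected so the sketch runs without a real /proc."""
    props = {}
    for path, pattern, prop in config:
        match = re.search(pattern, read_file(path), re.MULTILINE)
        if match:
            props[prop] = match.group(1)
    return props
```

The real version would publish each collected property on D-Bus instead of
returning a dict, of course.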

All of this is merely a collection of thoughts, nothing concrete. With that
said, it would be really great if you could provide some feedback such as
"I want this, but I really need that feature", or let me know it's all
implemented already :)

If this seems valuable, after gathering more feedback of feature
requirements, I'm going to turn them into design docs and upload for review.

-- 
Regards,
Kun




Thread overview: 13+ messages
2019-04-09 16:25 BMC health metrics (again!) Kun Yi
2019-04-11 12:56 ` Sivas Srr
2019-04-20  1:04   ` Kun Yi
2019-04-12 13:02 ` Andrew Geissler
2019-04-20  1:08   ` Kun Yi
2019-05-08  8:11 ` vishwa
2019-05-17  6:30   ` Neeraj Ladkani
2019-05-17  7:17     ` vishwa
2019-05-17  7:23       ` Neeraj Ladkani
2019-05-17  7:27         ` vishwa
2019-05-17 15:50           ` Kun Yi
2019-05-17 18:25             ` vishwa
2019-05-20 21:29               ` Neeraj Ladkani
