From: Andrei Kartashev <a.kartashev@yadro.com>
To: <openbmc@lists.ozlabs.org>
Subject: Re: [redfish/v1/Systems/system/Processors] How does it work on wolf pass?
Date: Wed, 14 Oct 2020 12:57:50 +0300	[thread overview]
Message-ID: <6853d70a5f647ac66dded94db7425de046e238cb.camel@yadro.com> (raw)
In-Reply-To: <ea9b85fb-8951-5c25-bc42-6f6e636d347e@hyvedesignsolutions.com>

Hi Brad,

It is already pulled: https://github.com/openbmc/smbios-mdr/
But the main problem here is that the BIOS has to know it should
share the SMBIOS table. We are currently struggling with this, since
our BIOS lacks that ability and we have to implement it on the BIOS
side.

BTW, this is the first time I have seen this patch. As far as I know,
today's implementation sends the SMBIOS table from the BIOS to the BMC
via IPMI; the OEM commands for this are implemented in intel-ipmi-oem.
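
Once the table has been transferred and smbios-mdr has parsed it, the
CPU objects should be reachable through the mapper. A rough way to
check (just a sketch, assuming the standard ObjectMapper service and
the default bmcweb credentials; substitute your BMC address and
credentials):

  # Ask the mapper for objects exposing the Cpu inventory interface
  busctl call xyz.openbmc_project.ObjectMapper \
      /xyz/openbmc_project/object_mapper \
      xyz.openbmc_project.ObjectMapper GetSubTree \
      sias / 0 1 xyz.openbmc_project.Inventory.Item.Cpu

  # If objects come back, the Redfish collection should be populated
  curl -k -u root:0penBmc \
      https://<bmc-ip>/redfish/v1/Systems/system/Processors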

On Wed, 2020-10-14 at 17:06 +0800, Brad Chou wrote:
> Hi Bills,
> 
> I am also interested in this kind of SMBIOS table processing.
> 
> May I know if Intel has plans to pull this feature into
> https://github.com/openbmc ?
> 
> 
> By the way, I notice that a patch is also required to make it work.
> 
> https://github.com/Intel-BMC/openbmc/blob/4c732e83b4ca9a869c0a3f6e9b7e22ac9c76a78f/meta-openbmc-mods/meta-common/recipes-kernel/linux/linux-aspeed/0035-Implement-a-memory-driver-share-memory.patch
> 
> It says the BIOS uses the BMC VGA shared memory to transfer the whole
> SMBIOS table to the BMC, specifically a 16MB region allocated at
> 0x9ff0:0000.
> 
> My question is: if the BMC VGA memory hardware strap setting is 64MB,
> i.e. the BMC already occupies all of the VGA memory as a frame buffer,
> can the BIOS still use the VGA shared memory to transfer the SMBIOS
> table?
> 
> 
> Brad Chou
> 
> 
> On 10/13/20 12:23 AM, Bills, Jason M wrote:
> >
> > On 10/9/2020 5:57 PM, Zhenfei Tai wrote:
> > > Hi,
> > > 
> > > I've been testing bmcweb and noticed the response from the URI 
> > > `redfish/v1/Systems/system/Processors` contains an empty
> > > collection.
> > > 
> > > {
> > >    "@odata.context": 
> > > "/redfish/v1/$metadata#ProcessorCollection.ProcessorCollection",
> > >    "@odata.id <http://odata.id>;": 
> > > "/redfish/v1/Systems/system/Processors/",
> > >    "@odata.type": "#ProcessorCollection.ProcessorCollection",
> > >    "Members": [],
> > >    "Members@odata.count": 0,
> > >    "Name": "Processor Collection"
> > > }
> > > 
> > > Looking at bmcweb code, it seems to look for dbus interfaces 
> > > `xyz.openbmc_project.Inventory.Item.Cpu` and 
> > > `xyz.openbmc_project.Inventory.Item.Accelerator`. However, they
> > > can't be seen on D-Bus.
> > > 
> > > # busctl tree --no-pager xyz.openbmc_project.Inventory.Item.Cpu
> > > Failed to introspect object / of service 
> > > xyz.openbmc_project.Inventory.Item.Cpu: The name is not
> > > activatable
> > > 
> > > Entity-manager and cpu-sensor are running in addition to bmcweb.
> > > The 
> > > entity-manager config is below and I can see the config is picked
> > > up 
> > > in `xyz.openbmc_project.EntityManager`.
> > > 
> > > {
> > >    "Exposes": [
> > >      {
> > >          "Address": "0x30",
> > >          "Bus": 0,
> > >          "CpuID": 1,
> > >          "Name": "CPU 1",
> > >          "Type": "XeonCPU"
> > >      },
> > >      {
> > >          "Address": "0x31",
> > >          "Bus": 0,
> > >          "CpuID": 2,
> > >          "Name": "CPU 2",
> > >          "Type": "XeonCPU"
> > >      }
> > >    ],
> > >    "Name": "internal_code_name",
> > >    "Probe":
> > > "xyz.openbmc_project.FruDevice({'BOARD_PRODUCT_NAME': 
> > > 'internal_product_name'})",
> > >    "Type": "Board"
> > > }
> > > 
> > > I'm not sure what else is required to have the URI work
> > > properly. 
> > > Could someone familiar with this issue help?
> > On Intel systems, we currently get most CPU information from the 
> > SMBIOS tables which are provided to the BMC through something
> > called 
> > the MDR. That application is available here: 
> > https://github.com/Intel-BMC/mdrv2.
> > 
> > When we have seen empty CPU or memory resource collections in
> > Redfish, 
> > it has usually been caused by a failure to get the SMBIOS data from
> > BIOS.
> > 
> > > Thanks,
> > > Zhenfei
-- 
Best regards,
Andrei Kartashev



Thread overview: 4+ messages
2020-10-10  0:57 [redfish/v1/Systems/system/Processors] How does it work on wolf pass? Zhenfei Tai
2020-10-12 16:23 ` Bills, Jason M
2020-10-14  9:06   ` Brad Chou
2020-10-14  9:57     ` Andrei Kartashev [this message]
