Openbmc archive at lore.kernel.org
* [redfish/v1/Systems/system/Processors] How does it work on wolf pass?
@ 2020-10-10  0:57 Zhenfei Tai
  2020-10-12 16:23 ` Bills, Jason M
  0 siblings, 1 reply; 4+ messages in thread
From: Zhenfei Tai @ 2020-10-10  0:57 UTC (permalink / raw)
  To: OpenBMC Maillist; +Cc: Ed Tanous


Hi,

I've been testing bmcweb and noticed the response from the URI
`redfish/v1/Systems/system/Processors` contains an empty collection.

{
  "@odata.context":
"/redfish/v1/$metadata#ProcessorCollection.ProcessorCollection",
  "@odata.id": "/redfish/v1/Systems/system/Processors/",
  "@odata.type": "#ProcessorCollection.ProcessorCollection",
  "Members": [],
  "Members@odata.count": 0,
  "Name": "Processor Collection"
}

Looking at the bmcweb code, it seems to look for the D-Bus interfaces
`xyz.openbmc_project.Inventory.Item.Cpu` and
`xyz.openbmc_project.Inventory.Item.Accelerator`. However, neither can be
seen on D-Bus.

# busctl tree --no-pager xyz.openbmc_project.Inventory.Item.Cpu
Failed to introspect object / of service
xyz.openbmc_project.Inventory.Item.Cpu: The name is not activatable

Entity-manager and cpu-sensor are running in addition to bmcweb. The
entity-manager config is below, and I can see that it is picked up by
`xyz.openbmc_project.EntityManager`.

{
  "Exposes": [
    {
        "Address": "0x30",
        "Bus": 0,
        "CpuID": 1,
        "Name": "CPU 1",
        "Type": "XeonCPU"
    },
    {
        "Address": "0x31",
        "Bus": 0,
        "CpuID": 2,
        "Name": "CPU 2",
        "Type": "XeonCPU"
    }
  ],
  "Name": "internal_code_name",
  "Probe": "xyz.openbmc_project.FruDevice({'BOARD_PRODUCT_NAME':
'internal_product_name'})",
  "Type": "Board"
}
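
For readers unfamiliar with how such a config gets activated: conceptually,
entity-manager evaluates the "Probe" expression against FRU properties and,
on a match, publishes the "Exposes" records. The sketch below is a
simplified illustration only (not entity-manager's actual matcher, which
supports AND/OR, regex values, and multiple interfaces):

```python
import ast
import re

def probe_matches(probe: str, fru: dict) -> bool:
    """Simplified sketch of probe evaluation: extract the property dict
    from a Probe string like
    "xyz.openbmc_project.FruDevice({'KEY': 'value'})" and require every
    listed key/value pair to match the FRU properties."""
    m = re.match(r"[\w.]+\((\{.*\})\)", probe)
    if not m:
        return False
    wanted = ast.literal_eval(m.group(1))  # the {'KEY': 'value'} part
    return all(fru.get(k) == v for k, v in wanted.items())

probe = ("xyz.openbmc_project.FruDevice("
         "{'BOARD_PRODUCT_NAME': 'internal_product_name'})")
print(probe_matches(probe, {"BOARD_PRODUCT_NAME": "internal_product_name"}))  # True
print(probe_matches(probe, {"BOARD_PRODUCT_NAME": "some_other_board"}))       # False
```

A matching probe only gets the "Exposes" entries onto D-Bus as
entity-manager objects; whether anything then publishes the
`Inventory.Item.Cpu` interface is a separate question, which is exactly
the issue in this thread.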

I'm not sure what else is required to have the URI work properly. Could
someone familiar with this issue help?

Thanks,
Zhenfei


* Re: [redfish/v1/Systems/system/Processors] How does it work on wolf pass?
  2020-10-10  0:57 [redfish/v1/Systems/system/Processors] How does it work on wolf pass? Zhenfei Tai
@ 2020-10-12 16:23 ` Bills, Jason M
  2020-10-14  9:06   ` Brad Chou
  0 siblings, 1 reply; 4+ messages in thread
From: Bills, Jason M @ 2020-10-12 16:23 UTC (permalink / raw)
  To: openbmc



On 10/9/2020 5:57 PM, Zhenfei Tai wrote:
> [...]
> I'm not sure what else is required to have the URI work properly. Could 
> someone familiar with this issue help?
On Intel systems, we currently get most CPU information from the SMBIOS 
tables which are provided to the BMC through something called the MDR. 
That application is available here: https://github.com/Intel-BMC/mdrv2.

When we have seen empty CPU or memory resource collections in Redfish, 
it has usually been caused by a failure to get the SMBIOS data from BIOS.
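
As a rough illustration of what consuming those tables involves (a
self-contained sketch over a tiny synthetic table, not the smbios-mdr
code; real Type 4 structures are much longer), counting the "Processor
Information" entries that back the Redfish collection looks like this:

```python
def count_smbios_type(table: bytes, wanted_type: int) -> int:
    """Count structures of a given type in a raw SMBIOS table blob.
    Each structure is: type(1) length(1) handle(2) + formatted area,
    followed by a string-set terminated by a double NUL."""
    count = 0
    off = 0
    while off + 4 <= len(table):
        stype, length = table[off], table[off + 1]
        if stype == 127:  # Type 127 marks end-of-table
            break
        if stype == wanted_type:
            count += 1
        # skip the formatted area, then find the double-NUL terminator
        p = off + length
        while p + 1 < len(table) and table[p:p + 2] != b"\x00\x00":
            p += 1
        off = p + 2
    return count

# Tiny synthetic table: one minimal Type 4 (processor) structure with no
# strings, then an end-of-table marker (Type 127).
table = (
    bytes([4, 4, 0x00, 0x01]) + b"\x00\x00" +   # Type 4, header only
    bytes([127, 4, 0x00, 0x02]) + b"\x00\x00"   # Type 127, end
)
print(count_smbios_type(table, 4))  # prints 1
```

If the BIOS never delivers the blob, a walk like this finds zero Type 4
structures, which matches the empty ProcessorCollection symptom.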



* Re: [redfish/v1/Systems/system/Processors] How does it work on wolf pass?
  2020-10-12 16:23 ` Bills, Jason M
@ 2020-10-14  9:06   ` Brad Chou
  2020-10-14  9:57     ` Andrei Kartashev
  0 siblings, 1 reply; 4+ messages in thread
From: Brad Chou @ 2020-10-14  9:06 UTC (permalink / raw)
  To: Bills, Jason M, openbmc

Hi Bills,

I am also interested in this kind of SMBIOS table processing.

May I know if Intel has plans to pull this feature into
https://github.com/openbmc ?

By the way, I noticed that a patch is also required to make it work:

https://github.com/Intel-BMC/openbmc/blob/4c732e83b4ca9a869c0a3f6e9b7e22ac9c76a78f/meta-openbmc-mods/meta-common/recipes-kernel/linux/linux-aspeed/0035-Implement-a-memory-driver-share-memory.patch

It says the BIOS uses the BMC VGA shared memory to transfer the whole
SMBIOS table to the BMC, specifically a 16 MB region allocated at
0x9ff0:0000.

My question is: if the BMC VGA memory hardware strap setting is 64 MB,
i.e., the BMC already occupies all of the VGA memory as the frame buffer,
can the BIOS still use the VGA shared memory to transfer the SMBIOS
table?
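
The collision concern boils down to a simple address-range check. A
minimal sketch, with hypothetical addresses chosen purely for
illustration (the real window base and frame-buffer placement depend on
the strap settings and the patch):

```python
def regions_overlap(a_start: int, a_size: int,
                    b_start: int, b_size: int) -> bool:
    """True if the half-open ranges [start, start+size) intersect."""
    return a_start < b_start + b_size and b_start < a_start + a_size

MiB = 1 << 20
# Hypothetical numbers: a 16 MiB shared window at 0x9ff00000 versus a
# 64 MiB frame buffer strapped just below it.
shared_start, shared_size = 0x9FF00000, 16 * MiB
fb_start, fb_size = 0x9C000000, 64 * MiB
print(regions_overlap(shared_start, shared_size, fb_start, fb_size))  # True
```

With those placeholder values the window lands inside the frame buffer,
which is exactly the conflict being asked about.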


Brad Chou


On 10/13/20 12:23 AM, Bills, Jason M wrote:
> [...]
> On Intel systems, we currently get most CPU information from the
> SMBIOS tables which are provided to the BMC through something called
> the MDR. That application is available here:
> https://github.com/Intel-BMC/mdrv2.
>
> When we have seen empty CPU or memory resource collections in Redfish,
> it has usually been caused by a failure to get the SMBIOS data from
> BIOS.


* Re: [redfish/v1/Systems/system/Processors] How does it work on wolf pass?
  2020-10-14  9:06   ` Brad Chou
@ 2020-10-14  9:57     ` Andrei Kartashev
  0 siblings, 0 replies; 4+ messages in thread
From: Andrei Kartashev @ 2020-10-14  9:57 UTC (permalink / raw)
  To: openbmc

Hi Brad,

It is already pulled: https://github.com/openbmc/smbios-mdr/
But the main problem here is that the BIOS has to know it should share
the SMBIOS table. We are currently struggling with this, since our BIOS
lacks this ability and we have to implement it in the BIOS ourselves.

BTW, this is the first time I have seen this patch. As far as I know,
today's implementation sends the SMBIOS table from the BIOS to the BMC
via IPMI; there are OEM commands for this implemented in intel-ipmi-oem.

On Wed, 2020-10-14 at 17:06 +0800, Brad Chou wrote:
> [...]
> May I know if Intel has plans to pull this feature into
> https://github.com/openbmc ?
>
> By the way, I noticed that a patch is also required to make it work:
> https://github.com/Intel-BMC/openbmc/blob/4c732e83b4ca9a869c0a3f6e9b7e22ac9c76a78f/meta-openbmc-mods/meta-common/recipes-kernel/linux/linux-aspeed/0035-Implement-a-memory-driver-share-memory.patch
> [...]
> Can the BIOS still use the VGA shared memory to transfer the SMBIOS
> table?
-- 
Best regards,
Andrei Kartashev



