From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: "peter.maydell@linaro.org" <peter.maydell@linaro.org>,
	"xiaoguangrong.eric@gmail.com" <xiaoguangrong.eric@gmail.com>,
	"shannon.zhaosl@gmail.com" <shannon.zhaosl@gmail.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Linuxarm <linuxarm@huawei.com>,
	Auger Eric <eric.auger@redhat.com>,
	"qemu-arm@nongnu.org" <qemu-arm@nongnu.org>,
	"xuwei \(O\)" <xuwei5@huawei.com>,
	"lersek@redhat.com" <lersek@redhat.com>
Subject: RE: [PATCH 0/5] ARM virt: Add NVDIMM support
Date: Mon, 25 Nov 2019 16:25:54 +0000	[thread overview]
Message-ID: <444efcb441fe42e5aff58b3af3ab14b4@huawei.com> (raw)
In-Reply-To: <20191125164541.3f0a593f@redhat.com>

Hi Igor,

> -----Original Message-----
> From: Igor Mammedov [mailto:imammedo@redhat.com]
> Sent: 25 November 2019 15:46
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: Auger Eric <eric.auger@redhat.com>; qemu-devel@nongnu.org;
> qemu-arm@nongnu.org; peter.maydell@linaro.org;
> shannon.zhaosl@gmail.com; xuwei (O) <xuwei5@huawei.com>;
> lersek@redhat.com; Linuxarm <linuxarm@huawei.com>
> Subject: Re: [PATCH 0/5] ARM virt: Add NVDIMM support
> 
> On Mon, 25 Nov 2019 13:20:02 +0000
> Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:
> 
> > Hi Eric/Igor,
> >
> > > -----Original Message-----
> > > From: Shameerali Kolothum Thodi
> > > Sent: 22 October 2019 15:05
> > > To: 'Auger Eric' <eric.auger@redhat.com>; qemu-devel@nongnu.org;
> > > qemu-arm@nongnu.org; imammedo@redhat.com
> > > Cc: peter.maydell@linaro.org; shannon.zhaosl@gmail.com; xuwei (O)
> > > <xuwei5@huawei.com>; lersek@redhat.com; Linuxarm
> > > <linuxarm@huawei.com>
> > > Subject: RE: [PATCH 0/5] ARM virt: Add NVDIMM support
> 
> not related to the problem discussed in this patch, but you probably
> need to update docs/specs/acpi_nvdimm.txt to account for your changes

Ok.

> > >
> >
> > [..]
> >
> > > > one question: I noticed that when an NVDIMM slot is hotplugged one gets
> > > > the following trace on the guest:
> > > >
> > > > nfit ACPI0012:00: found a zero length table '0' parsing nfit
> > > > pmem0: detected capacity change from 0 to 1073741824
> > > >
> > > > Have you experienced the 0 length trace?
> > >
> > > I double checked and yes that trace is there. And I did a quick check with
> > > x86 and it is not there.
> > >
> > > The reason looks to be that the ARM64 kernel receives an additional 8 bytes
> > > when the kernel evaluates the "_FIT" object.
> > >
> > > For the same test scenario, Qemu reports a FIT buffer size of 0xb8 and
> > >
> > > X86 Guest kernel,
> > > [    1.601077] acpi_nfit_init: data 0xffff8a273dc12b18 sz 0xb8
> > >
> > > ARM64 Guest,
> > > [    0.933133] acpi_nfit_init: data 0xffff00003cbe6018 sz 0xc0
> > >
> > > I am not sure how that size gets changed for ARM, which results in
> > > the above-mentioned 0 length trace. I need to debug this further.
> > >
> > > Please let me know if you have any pointers...
> >
> > I spent some time debugging this further and it looks like the AML code
> > behaves differently on x86 and ARM64.
> The FIT table is built dynamically and you are the first to debug
> such an issue
> (apart from the author of the NVDIMM code.
:)
>  btw: why is the NVDIMM author not on the CC list???)

Right. Missed that. CCd.
 
> 
> > Booted the guest with nvdimm memory, and used an SSDT override with dbg prints
> > added:
> >
> > -object memory-backend-ram,id=mem1,size=1G \
> > -device nvdimm,id=dimm1,memdev=mem1 \
> >
> > On X86,
> > -----------
> >
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: Read FIT: offset 0 FIT size 0xb8 Dirty Yes.
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: read_fit_out buf size 0xc0 func_ret_status 0
> >
> > [AML]"NVDIMM-NCAL: Rcvd RLEN 000000C0"
> > [AML]"NVDIMM-NCAL: Creating OBUF with 000005E0 bits"
> > [AML]"NVDIMM-NCAL: Created  BUF(Local7) size 000000BC"
> > [AML]"NVDIMM-RFIT Rcvd buf size 000000BC"
> > [AML]"NVDIMM-RFIT Created NVDR.RFIT.BUFF size 000000B8"
> > [AML]"NVDIMM-FIT: Rcvd buf size 000000B8"
> >
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: Read FIT: offset 0xb8 FIT size 0xb8 Dirty No.
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: read_fit_out buf size 0x8 func_ret_status 0
> >
> > [AML]"NVDIMM-NCAL: Rcvd RLEN 00000008"
> > [AML]"NVDIMM-NCAL: Creating OBUF with 00000020 bits"
> > [AML]"NVDIMM-NCAL: Created  BUF(Local7) size 00000004"
> > [AML]"NVDIMM-RFIT Rcvd buf size 00000004"
> > [AML]"NVDIMM-FIT: Rcvd buf size 00000000"
> > [AML]"NVDIMM-FIT: _FIT returned size 000000B8"
> >
> > [ KERNEL] acpi_nfit_init: NVDIMM: data 0xffff9855bb9a7518 sz 0xb8  --> Guest receives correct size(0xb8) here
> >
> > On ARM64,
> > ---------------
> >
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: Read FIT: offset 0 FIT size 0xb8 Dirty Yes.
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: read_fit_out buf size 0xc0 func_ret_status 0
> >
> > [AML]"NVDIMM-NCAL: Rcvd RLEN 00000000000000C0"
> > [AML]"NVDIMM-NCAL: Creating OBUF with 00000000000005E0 bits"
> > [AML]"NVDIMM-NCAL: Created  BUF(Local7) size 00000000000000BC"
> > [AML]"NVDIMM-RFIT Rcvd buf size 00000000000000BC"
> > [AML]"NVDIMM-RFIT Created NVDR.RFIT.BUFF size 00000000000000B8"
> > [AML]"NVDIMM-FIT: Rcvd buf size 00000000000000B8"
> >
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: Read FIT: offset 0xb8 FIT size 0xb8 Dirty No.
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: read_fit_out buf size 0x8 func_ret_status 0
> >
> > [AML]"NVDIMM-NCAL: Rcvd RLEN 0000000000000008"
> > [AML]"NVDIMM-NCAL: Creating OBUF with 0000000000000020 bits"  --> All
> looks same as x86 up to here.
> > [AML]"NVDIMM-NCAL: Created  BUF(Local7) size 0000000000000008"  --->
> The size goes wrong. 8 bytes instead of 4!.
> 
> > [AML]"NVDIMM-RFIT Rcvd buf size 0000000000000008"
> > [AML]"NVDIMM-RFIT Created NVDR.RFIT.BUFF size 0000000000000004"
> > [AML]"NVDIMM-FIT: Rcvd buf size 0000000000000008"  --> Again size goes
> wrong. 8 bytes instead of 4!.
> 
>                 Local1 = SizeOf (Local0)
>                 printf("NVDIMM-RFIT Rcvd buf size %o", Local1)
> 
>                 Local1 -= 0x04
>        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ here 4 is subtracted, so the
>        sizes you see are consistent

Hmm... IIUC, Method (RFIT) creates a buffer of size 0x4, as per
this:

"NVDIMM-RFIT Created NVDR.RFIT.BUFF size 0000000000000004"

And this is the buffer received by Method (_FIT):

                    Local0 = RFIT (Local3)
                    Local1 = SizeOf (Local0)
                    printf("NVDIMM-FIT: Rcvd buf size %o", Local1)

But "NVDIMM-FIT: Rcvd buf size 0000000000000008".

As you can see above, in the first iteration the buffer size received by _FIT is
actually what RFIT created (0xB8).
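
For reference, here is the size chain the x86 trace implies, written out as a
tiny stand-alone C sketch (the 4-byte-length plus 4-byte-status split is
inferred from the logged sizes above, not quoted from QEMU headers):

/* Minimal sketch: each DSM reply appears to be 4 bytes of length plus
 * 4 bytes of status followed by the payload; NCAL strips the length,
 * RFIT strips the status.  Layout inferred from the traces above. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* QEMU read_fit_out buffer sizes from the two reads */
    uint32_t rlen[] = { 0xc0, 0x08 };

    for (int i = 0; i < 2; i++) {
        uint32_t obuf = rlen[i] - 4;   /* NCAL: drop the length field */
        uint32_t fit  = obuf - 4;      /* RFIT: drop the status field */
        printf("RLEN 0x%02x -> NCAL BUF 0x%02x -> RFIT BUFF 0x%02x\n",
               rlen[i], obuf, fit);    /* 0xc0->0xbc->0xb8, 0x08->0x04->0x00 */
    }
    return 0;
}

On x86 both reads follow that chain; in the ARM64 trace the second read's
buffer instead stays at 8 bytes after NCAL and again when _FIT receives it,
which is where the extra 8 bytes come from.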

> 
>                 If ((Local1 == Zero))
>                 {
>                     Return (Buffer (Zero){})
>                 }
> 
>                 CreateField (Local0, 0x20, (Local1 << 0x03), BUFF)
>                 printf("NVDIMM-RFIT Created NVDR.RFIT.BUFF size %o",
> Local1)
> 
> 
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: Read FIT: offset 0xc0 FIT size 0xb8 Dirty No.  --> Another read is attempted
> > [Qemu]NVDIMM:nvdimm_dsm_func_read_fit: read_fit_out buf size 0x8 func_ret_status 3  --> Error status returned
> 
> status 3 means that QEMU didn't like the content of NRAM, and there is only
> one place like this in nvdimm_dsm_func_read_fit():
>     if (read_fit->offset > fit->len) {
>         func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
>         goto exit;
>     }
> 
> so I'd start looking from here and check that QEMU gets the expected data
> in nvdimm_dsm_write(). In other words, I'd try to trace/compare the
> content of the DSM buffer (from the QEMU side).

I had printed the DSM buffer previously and it looked the same; I will
double-check that.
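
For anyone reproducing this, the kind of byte-for-byte comparison meant here
can be done with a dump along these lines (a minimal stand-alone C sketch;
dump_dsm_buf is a made-up helper, not an existing QEMU function -- in QEMU the
equivalent print would go into nvdimm_dsm_write(), which Igor points at above):

/* Sketch only: a generic hex dump of a DSM request/response buffer so
 * the x86 and ARM64 runs can be diffed byte for byte.  Not QEMU code;
 * the example bytes below are illustrative. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static void dump_dsm_buf(const char *tag, const uint8_t *buf, size_t len)
{
    printf("%s, %zu bytes:", tag, len);
    for (size_t i = 0; i < len; i++) {
        if (i % 16 == 0) {
            printf("\n  %04zx:", i);
        }
        printf(" %02x", buf[i]);
    }
    printf("\n");
}

int main(void)
{
    uint8_t example[20] = { 0 };            /* placeholder contents */
    dump_dsm_buf("DSM buffer", example, sizeof(example));
    return 0;
}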
 
> 
> > [AML]"NVDIMM-NCAL: Rcvd RLEN 0000000000000008"
> > [AML]"NVDIMM-NCAL: Creating OBUF with 0000000000000020 bits"
> > [AML]"NVDIMM-NCAL: Created  BUF(Local7) size 0000000000000008"
> > [AML]"NVDIMM-FIT: Rcvd buf size 0000000000000000"
> RFIT returns a 0-sized buffer in case of error

Yes.
                If ((Zero != STAU))
                {
                    Return (Buffer (Zero){})
                }
 
> > [AML]"NVDIMM-FIT: _FIT returned size 00000000000000C0"   --> Wrong
> size returned.
> after that it goes
> 
>                     Local0 = Buffer (Zero){}
>                     Local0 = RFIT (Local3)
>                     Local1 = SizeOf (Local0)
>                     printf("NVDIMM-FIT: Rcvd buf size %o", Local1)
>                     If ((RSTA == 0x0100))
>                     {
>                         Local2 = Buffer (Zero){}
>                         Local3 = Zero
>                     }
>                     Else
>                     {
>                         If ((Local1 == Zero))
>   --> here, probably on the second iteration of the loop, the zero
>       size is taken as an EOF sign without checking for any error
>       (RSTA != 0), and only a partial buffer is returned

I guess you meant the third iteration. Yes, it doesn't check for the RSTA != 0
case. And in the second iteration, since status == 0 and the buffer length
received is 8 bytes,
                        Concatenate (Local2, Local0, Local2)

The final buffer returned to the guest becomes 0xC0 (0xb8 + 0x8).
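
To make that failure mode concrete, here is a small stand-alone C model of the
loop the generated AML implements (a sketch only: the reply sizes and status
values are taken from the ARM64 trace above; the code is neither QEMU nor the
real ASL):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct rfit_reply {
    uint32_t status;   /* RSTA: 0 = OK, 3 = invalid offset, 0x100 = FIT dirty */
    size_t   len;      /* payload bytes handed back by RFIT */
};

/* Canned replies reproducing the ARM64 trace: 0xb8 good bytes, then a
 * spurious 8-byte chunk, then an error once the offset passes fit->len. */
static const struct rfit_reply replies[] = {
    { 0, 0xb8 }, { 0, 0x08 }, { 3, 0x00 },
};

int main(void)
{
    size_t off = 0, total = 0;

    for (size_t i = 0; i < sizeof(replies) / sizeof(replies[0]); i++) {
        struct rfit_reply r = replies[i];
        printf("RFIT(offset=0x%zx): status=%u, len=0x%zx\n", off, r.status, r.len);

        if (r.status == 0x100) {      /* FIT changed under us: start over */
            off = total = 0;
            continue;
        }
        if (r.len == 0) {
            /* The AML returns here without checking that status is 0,
             * so the error reply just ends the loop quietly ...        */
            break;
        }
        total += r.len;               /* Concatenate (Local2, Local0, Local2) */
        off   += r.len;
    }

    /* ... and the oversized buffer is what _FIT hands to the guest kernel. */
    printf("_FIT returns 0x%zx bytes (0xb8 was expected)\n", total);
    return 0;
}

A fixed loop would bail out (or report the error) whenever RSTA != 0 instead
of treating a zero-length reply alone as the end of the FIT.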
 
>                         {
>                            printf("NVDIMM-FIT: _FIT returned size %o",
> SizeOf(Local2))
>                            Return (Local2)
>                         }
> 
> relevant code in QEMU that builds this AML is in nvdimm_build_fit()
> 
> > [ KERNEL] acpi_nfit_init: NVDIMM: data 0xffff0000fc57ce18 sz 0xc0  --> Kernel gets 0xc0 instead of 0xb8
> >
> >
> > It looks like the AML "CreateField (ODAT, Zero, Local1, OBUF)" goes wrong for
> > ARM64 when the buffer is all zeroes. My knowledge of AML is very limited and
> > I am not sure whether this is a 32/64-bit issue or not. I am attaching the SSDT
> > files with the above dbg prints added. Could you please take a look and let me
> > know what actually is going on here...
> 
> diff -u SSDT-dbg-x86.dsl SSDT-dbg-arm64.dsl
> --- SSDT-dbg-x86.dsl	2019-11-25 14:50:22.024983026 +0100
> +++ SSDT-dbg-arm64.dsl	2019-11-25 14:50:06.740690645 +0100
> @@[...]
> @@ -28,7 +28,7 @@
>              Method (NCAL, 5, Serialized)
>              {
>                  Local6 = MEMA /* \MEMA */
> -                OperationRegion (NPIO, SystemIO, 0x0A18, 0x04)
> +                OperationRegion (NPIO, SystemMemory, 0x09090000, 0x04)
> that's fine and matches your code
> 
>                  OperationRegion (NRAM, SystemMemory, Local6, 0x1000)
>                  Field (NPIO, DWordAcc, NoLock, Preserve)
>                  {
> @@ -210,6 +210,6 @@
>          }
>      }
> 
> -    Name (MEMA, 0xBFBFD000)
> +    Name (MEMA, 0xFFFF0000)
> 
> However, the value here is suspicious. If I recall right, it should
> point to the DSM buffer allocated by firmware somewhere in the guest RAM.

But then it will be horribly wrong and will result in inconsistent behavior
as well, right?

Thanks,
Shameer
 
> 
>  }
> 
> > Much appreciated,
> > Shameer.
> >
> >



Thread overview: 28+ messages
2019-10-04 15:52 [PATCH 0/5] ARM virt: Add NVDIMM support Shameer Kolothum
2019-10-04 15:52 ` [PATCH 1/5] hw/arm: Align ACPI blob len to PAGE size Shameer Kolothum
2019-11-08 16:17   ` Igor Mammedov
2019-11-11 12:47     ` Shameerali Kolothum Thodi
2019-12-09 13:04       ` Shameerali Kolothum Thodi
2019-10-04 15:52 ` [PATCH 2/5] nvdimm: Use configurable ACPI IO base and size Shameer Kolothum
2019-11-11 14:24   ` Igor Mammedov
2019-10-04 15:53 ` [PATCH 3/5] hw/arm/virt: Add nvdimm hot-plug infrastructure Shameer Kolothum
2019-11-11 14:38   ` Igor Mammedov
2019-10-04 15:53 ` [PATCH 4/5] hw/arm/boot: Expose the pmem nodes in the DT Shameer Kolothum
2019-11-11 14:46   ` Igor Mammedov
2019-10-04 15:53 ` [PATCH 5/5] hw/arm/virt: Add nvdimm hotplug support Shameer Kolothum
2019-11-12 13:01   ` Igor Mammedov
2019-10-18 16:39 ` [PATCH 0/5] ARM virt: Add NVDIMM support Auger Eric
2019-10-22 14:05   ` Shameerali Kolothum Thodi
2019-11-25 13:20   ` Shameerali Kolothum Thodi
2019-11-25 15:45     ` Igor Mammedov
2019-11-25 16:25       ` Shameerali Kolothum Thodi [this message]
2019-11-26  8:56         ` Igor Mammedov
2019-11-26  9:46           ` Andrew Jones
2019-11-28 12:36           ` Shameerali Kolothum Thodi
2019-12-09 17:39           ` Shameerali Kolothum Thodi
2019-12-11  7:57             ` Igor Mammedov
2019-12-13 12:52               ` Shameerali Kolothum Thodi
2020-01-06 17:06               ` Shameerali Kolothum Thodi
2020-01-09 17:13                 ` Igor Mammedov
2020-01-13 13:11                   ` Shameerali Kolothum Thodi
2019-11-12 14:39 ` Igor Mammedov
