linux-nvdimm.lists.01.org archive mirror
* Question on PMEM regions (Linux 4.9 Kernel & above)
@ 2020-06-19 23:17 Ananth, Rajesh
  2020-06-19 23:34 ` Dan Williams
  0 siblings, 1 reply; 6+ messages in thread
From: Ananth, Rajesh @ 2020-06-19 23:17 UTC (permalink / raw)
  To: linux-nvdimm

I have a question about the default region creation (unlabeled NVDIMMs) on interleave sets.  I observe that for a single interleave set, kernels earlier than 4.9 create only one "Region0->namespace0.0" (pmem0 spanning the entire size), but later kernels create both "Region0->namespace0.0" and "Region1->namespace1.0" by default for the same interleave set (pmem0 and pmem1, each half the size of the interleave set).

I don't have any explicit labels created with the ndctl utilities; I just plug in fresh NVDIMM modules like I always do.

I searched for and found the relevant information about the nd_pmem driver and its support for multiple pmem namespaces.  I am wondering whether there is a way -- through kernel parameters or something similar -- to get the same default behavior that existed before the 4.9 driver changes.

Thanks,
Rajesh

_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Question on PMEM regions (Linux 4.9 Kernel & above)
  2020-06-19 23:17 Question on PMEM regions (Linux 4.9 Kernel & above) Ananth, Rajesh
@ 2020-06-19 23:34 ` Dan Williams
       [not found]   ` <BYAPR04MB4310B8A76F318E50237447E294980@BYAPR04MB4310.namprd04.prod.outlook.com>
  0 siblings, 1 reply; 6+ messages in thread
From: Dan Williams @ 2020-06-19 23:34 UTC (permalink / raw)
  To: Ananth, Rajesh; +Cc: linux-nvdimm

On Fri, Jun 19, 2020 at 4:18 PM Ananth, Rajesh <Rajesh.Ananth@smartm.com> wrote:
>
> I have a question about the default region creation (unlabeled NVDIMMs) on interleave sets.  I observe that for a single interleave set, kernels earlier than 4.9 create only one "Region0->namespace0.0" (pmem0 spanning the entire size), but later kernels create both "Region0->namespace0.0" and "Region1->namespace1.0" by default for the same interleave set (pmem0 and pmem1, each half the size of the interleave set).
>
> I don't have any explicit labels created with the ndctl utilities; I just plug in fresh NVDIMM modules like I always do.
>
> I searched for and found the relevant information about the nd_pmem driver and its support for multiple pmem namespaces.  I am wondering whether there is a way -- through kernel parameters or something similar -- to get the same default behavior that existed before the 4.9 driver changes.

How is your platform BIOS indicating the persistent memory range? I
suspect you might be using the non-standard Type-12 memory hack and
are hitting this issue:

23446cb66c07 x86/e820: Don't merge consecutive E820_PRAM ranges
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=23446cb66c07

For it to show up as one range the BIOS needs to tell Linux that it is
one coherent range. You can force the kernel to override the BIOS
provided memory map with the memmap= parameter. Some details of that
here:

https://nvdimm.wiki.kernel.org/how_to_choose_the_correct_memmap_kernel_parameter_for_pmem_on_your_system
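For illustration, the memmap= string for a hypothetical 32 GiB persistent-memory range based at 0x4080000000 could be formed like this (the helper name is mine and purely illustrative; the '!' suffix is what marks the range as E820 persistent memory):

```python
def memmap_param(base: int, size: int) -> str:
    """Format a memmap= kernel parameter marking `size` bytes at `base`
    as persistent memory (the '!' suffix selects the E820_PRAM type)."""
    if size % (1 << 30):
        raise ValueError("expected a GiB-aligned size")
    return "memmap=%dG!%#x" % (size >> 30, base)

# Hypothetical 32 GiB range starting at 0x4080000000:
print(memmap_param(0x4080000000, 32 << 30))  # → memmap=32G!0x4080000000
```

Note this only forces a single range into the memory map; it does not change how the NFIT is parsed.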

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Question on PMEM regions (Linux 4.9 Kernel & above)
       [not found]   ` <BYAPR04MB4310B8A76F318E50237447E294980@BYAPR04MB4310.namprd04.prod.outlook.com>
@ 2020-06-20  0:11     ` Dan Williams
  2020-06-20  2:02       ` Ananth, Rajesh
  0 siblings, 1 reply; 6+ messages in thread
From: Dan Williams @ 2020-06-20  0:11 UTC (permalink / raw)
  To: Ananth, Rajesh; +Cc: linux-nvdimm

[ add back linux-nvdimm as others may hit the same issue too and I
want this in the archives ]

On Fri, Jun 19, 2020 at 4:49 PM Ananth, Rajesh <Rajesh.Ananth@smartm.com> wrote:
>
> Dan,
>
> Thank you so much for your response.  Our platform is fully NFIT-compliant and does not use the Type-12/E820 maps.

Ah, great.

>
> We have 2 NVDIMMs interleaved in the same Memory Channel, each 16 GB in size.
>
> This is what the 4.7.9 kernel reports for "/proc/iomem":

Can you post the output of:

acpidump -n NFIT

...?

Labels can't create new regions, so there must be a behavior
difference in how these kernels are parsing this NFIT.
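As an aside, the difference is easy to see mechanically; a rough script (mine, purely illustrative -- ndctl is the real tool) that groups namespace entries under each "Persistent Memory" range in /proc/iomem-style text:

```python
import re

def pmem_namespaces(iomem: str):
    """Group namespace entries under each "Persistent Memory" range in
    /proc/iomem-style text. Illustrative only; not an ndctl interface."""
    out, current = {}, None
    for line in iomem.splitlines():
        m = re.match(r"(\s*)([0-9a-f]+)-([0-9a-f]+) : (.+)", line)
        if not m:
            continue
        indent, lo, hi, name = m.groups()
        if name == "Persistent Memory":
            current = (int(lo, 16), int(hi, 16))
            out[current] = []
        elif indent and current is not None and name.startswith("namespace"):
            out[current].append((int(lo, 16), int(hi, 16), name))
        elif not indent:
            current = None
    return out
```

Run against the two listings below, the 4.7.9 one yields a single namespace covering the whole range, while the 4.16 one yields two half-size namespaces under the same range.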

>
> 00001000-0009afff : System RAM
> 0009b000-0009ffff : reserved
> 000a0000-000bffff : PCI Bus 0000:00
> 000c0000-000c7fff : Video ROM
>   000c4000-000c7fff : PCI Bus 0000:00
> 000c8000-000c8dff : Adapter ROM
> 000c9000-000c9dff : Adapter ROM
> 000e0000-000fffff : reserved
>   000f0000-000fffff : System ROM
> 00100000-6984ffff : System RAM
>   2e000000-2e7f1922 : Kernel code
>   2e7f1923-2ed448ff : Kernel data
>   2eedb000-2f055fff : Kernel bss
> 69850000-6c1f8fff : reserved
>   6b1dd018-6b1dd018 : APEI ERST
>   6b1dd01c-6b1dd021 : APEI ERST
>   6b1dd028-6b1dd039 : APEI ERST
>   6b1dd040-6b1dd04c : APEI ERST
>   6b1dd050-6b1df04f : APEI ERST
> 6c1f9000-6c322fff : System RAM
> 6c323000-6ce83fff : ACPI Non-volatile Storage
> 6ce84000-6f2fcfff : reserved
> 6f2fd000-6f7fffff : System RAM
>  fec00000-fec003ff : IOAPIC 0
>   fec01000-fec013ff : IOAPIC 1
>   fec08000-fec083ff : IOAPIC 2
>   fec10000-fec103ff : IOAPIC 3
>   fec18000-fec183ff : IOAPIC 4
>   fec20000-fec203ff : IOAPIC 5
>   fec28000-fec283ff : IOAPIC 6
>   fec30000-fec303ff : IOAPIC 7
>   fec38000-fec383ff : IOAPIC 8
> fed00000-fed003ff : HPET 0
>   fed00000-fed003ff : PNP0103:00
> fed12000-fed1200f : pnp 00:01
> fed12010-fed1201f : pnp 00:01
> fed1b000-fed1bfff : pnp 00:01
> fed20000-fed44fff : reserved
> fed45000-fed8bfff : pnp 00:01
> fee00000-feefffff : pnp 00:01
>   fee00000-fee00fff : Local APIC
> ff000000-ffffffff : reserved
>   ff000000-ffffffff : pnp 00:01
> 100000000-407fffffff : System RAM
> 4080000000-487fffffff : Persistent Memory         <<<<<  PERSISTENT MEMORY
>   4080000000-487fffffff : namespace0.0
> 4880000000-887fffffff : System RAM
>
> The same system configuration under the 4.16 kernel (we just rebooted with the new kernel):
>
> 00001000-0009afff : System RAM
> 0009b000-0009ffff : Reserved
> 000a0000-000bffff : PCI Bus 0000:00
> 000c0000-000c7fff : Video ROM
>   000c4000-000c7fff : PCI Bus 0000:00
> 000c8000-000c8dff : Adapter ROM
> 000c9000-000c9dff : Adapter ROM
> 000e0000-000fffff : Reserved
>   000f0000-000fffff : System ROM
> 00100000-6984ffff : System RAM
> 69850000-6c1f8fff : Reserved
>   6b1dd018-6b1dd018 : APEI ERST
>   6b1dd01c-6b1dd021 : APEI ERST
>   6b1dd028-6b1dd039 : APEI ERST
>   6b1dd040-6b1dd04c : APEI ERST
>   6b1dd050-6b1df04f : APEI ERST
> 6c1f9000-6c322fff : System RAM
> 6c323000-6ce83fff : ACPI Non-volatile Storage
> 6ce84000-6f2fcfff : Reserved
> 6f2fd000-6f7fffff : System RAM
> 6f800000-8fffffff : Reserved
>   80000000-8fffffff : PCI MMCONFIG 0000 [bus 00-ff]
> 90000000-9d7fffff : PCI Bus 0000:00
> fec18000-fec183ff : IOAPIC 4
>   fec20000-fec203ff : IOAPIC 5
>   fec28000-fec283ff : IOAPIC 6
>   fec30000-fec303ff : IOAPIC 7
>   fec38000-fec383ff : IOAPIC 8
> fed00000-fed003ff : HPET 0
>   fed00000-fed003ff : PNP0103:00
> fed12000-fed1200f : pnp 00:01
> fed12010-fed1201f : pnp 00:01
> fed1b000-fed1bfff : pnp 00:01
> fed20000-fed44fff : Reserved
> fed45000-fed8bfff : pnp 00:01
> fee00000-feefffff : pnp 00:01
>   fee00000-fee00fff : Local APIC
> ff000000-ffffffff : Reserved
>   ff000000-ffffffff : pnp 00:01
> 100000000-407fffffff : System RAM
> 4080000000-487fffffff : Persistent Memory             <<<  PERSISTENT MEMORY
>   4080000000-447fffffff : namespace0.0
>   4480000000-487fffffff : namespace1.0
> 4880000000-887fffffff : System RAM
>   4d15000000-4d15c031d0 : Kernel code
>   4d15c031d1-4d16387b7f : Kernel data
>   4d1692d000-4d16a82fff : Kernel bss
>
>
> Thanks,
> Rajesh
>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* RE: Question on PMEM regions (Linux 4.9 Kernel & above)
  2020-06-20  0:11     ` Dan Williams
@ 2020-06-20  2:02       ` Ananth, Rajesh
  2020-06-20  3:20         ` Dan Williams
  0 siblings, 1 reply; 6+ messages in thread
From: Ananth, Rajesh @ 2020-06-20  2:02 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-nvdimm

We used Ubuntu 18.04 to get the "acpidump" output (this is the only complete package distribution we have; otherwise we mainly use kernels we build ourselves).  The NFIT data is all valid, but somehow the "@ address" at the beginning is printed as zeros.

=============================  acpidump -n NFIT  ========================================

NFIT @ 0x0000000000000000              <<<<  DON'T KNOW WHY. 
  0000: 4E 46 49 54 A4 01 00 00 01 83 41 4C 41 53 4B 41  NFIT......ALASKA
  0010: 41 20 4D 20 49 20 00 00 02 00 00 00 49 4E 54 4C  A M I ......INTL
  0020: 13 10 09 20 00 00 00 00 00 00 38 00 01 00 02 00  ... ......8.....
  0030: 00 00 00 00 00 00 00 00 79 D3 F0 66 F3 B4 74 40  ........y..f..t@
  0040: AC 43 0D 33 18 B7 8C DB 00 00 00 80 40 00 00 00  .C.3........@...
  0050: 00 00 00 00 04 00 00 00 08 80 00 00 00 00 00 00  ................
  0060: 04 00 50 00 01 00 01 94 53 72 01 00 01 94 4E 72  ..P.....Sr....Nr
  0070: 05 00 01 01 20 08 00 00 01 4E 2E ED 01 01 00 00  .... ....N......
  0080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  00A0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  00B0: 01 00 30 00 20 00 00 00 48 00 00 00 01 00 01 00  ..0. ...H.......
  00C0: 00 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00  ................
  00D0: 00 00 00 00 00 00 00 00 00 00 01 00 21 00 00 00  ............!...
  00E0: 00 00 38 00 02 00 02 00 00 00 00 00 00 00 00 00  ..8.............
  00F0: 79 D3 F0 66 F3 B4 74 40 AC 43 0D 33 18 B7 8C DB  y..f..t@.C.3....
  0100: 00 00 00 80 44 00 00 00 00 00 00 00 04 00 00 00  ....D...........
  0110: 08 80 00 00 00 00 00 00 04 00 50 00 02 00 01 94  ..........P.....
  0120: 53 72 01 00 01 94 4E 72 05 00 01 01 20 08 00 00  Sr....Nr.... ...
  0130: 01 4E 2E ED 01 01 00 00 00 00 00 00 00 00 00 00  .N..............
  0140: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0150: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0160: 00 00 00 00 00 00 00 00 01 00 30 00 20 00 00 00  ..........0. ...
  0170: 48 00 01 00 02 00 02 00 00 00 00 00 04 00 00 00  H...............
  0180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0190: 00 00 01 00 21 00 00 00 03 00 0C 00 00 00 00 00  ....!...........
  01A0: 48 00 49 00                                      H.I.

=============================  acpidump (all)  ========================================

NFIT @ 0x0000000000000000    <<<<  DON'T KNOW WHY.
  0000: 4E 46 49 54 A4 01 00 00 01 83 41 4C 41 53 4B 41  NFIT......ALASKA
  0010: 41 20 4D 20 49 20 00 00 02 00 00 00 49 4E 54 4C  A M I ......INTL
  0020: 13 10 09 20 00 00 00 00 00 00 38 00 01 00 02 00  ... ......8.....
  0030: 00 00 00 00 00 00 00 00 79 D3 F0 66 F3 B4 74 40  ........y..f..t@
  0040: AC 43 0D 33 18 B7 8C DB 00 00 00 80 40 00 00 00  .C.3........@...
  0050: 00 00 00 00 04 00 00 00 08 80 00 00 00 00 00 00  ................
  0060: 04 00 50 00 01 00 01 94 53 72 01 00 01 94 4E 72  ..P.....Sr....Nr
  0070: 05 00 01 01 20 08 00 00 01 4E 2E ED 01 01 00 00  .... ....N......
  0080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  00A0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  00B0: 01 00 30 00 20 00 00 00 48 00 00 00 01 00 01 00  ..0. ...H.......
  00C0: 00 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00  ................
  00D0: 00 00 00 00 00 00 00 00 00 00 01 00 21 00 00 00  ............!...
  00E0: 00 00 38 00 02 00 02 00 00 00 00 00 00 00 00 00  ..8.............
  00F0: 79 D3 F0 66 F3 B4 74 40 AC 43 0D 33 18 B7 8C DB  y..f..t@.C.3....
  0100: 00 00 00 80 44 00 00 00 00 00 00 00 04 00 00 00  ....D...........
  0110: 08 80 00 00 00 00 00 00 04 00 50 00 02 00 01 94  ..........P.....
  0120: 53 72 01 00 01 94 4E 72 05 00 01 01 20 08 00 00  Sr....Nr.... ...
  0130: 01 4E 2E ED 01 01 00 00 00 00 00 00 00 00 00 00  .N..............
  0140: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0150: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0160: 00 00 00 00 00 00 00 00 01 00 30 00 20 00 00 00  ..........0. ...
  0170: 48 00 01 00 02 00 02 00 00 00 00 00 04 00 00 00  H...............
  0180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  0190: 00 00 01 00 21 00 00 00 03 00 0C 00 00 00 00 00  ....!...........
  01A0: 48 00 49 00                                      H.I.

MCFG @ 0x0000000000000000  <<<<  DON'T KNOW WHY.
  0000: 4D 43 46 47 3C 00 00 00 01 61 41 4C 41 53 4B 41  MCFG<....aALASKA
  0010: 41 20 4D 20 49 00 00 00 09 20 07 01 4D 53 46 54  A M I.... ..MSFT
  0020: 97 00 00 00 00 00 00 00 00 00 00 00 00 00 00 80  ................
  0030: 00 00 00 00 00 00 00 FF 00 00 00 00              ............

EINJ @ 0x0000000000000000   <<<<  DON'T KNOW WHY.
  0000: 45 49 4E 4A 50 01 00 00 01 6E 41 4C 41 53 4B 41  EINJP....nALASKA
  0010: 41 20 4D 20 49 20 00 00 01 00 00 00 49 4E 54 4C  A M I ......INTL
  0020: 01 00 00 00 0C 00 00 00 00 00 00 00 09 00 00 00  ................
  0030: 00 03 01 00 00 40 00 04 18 80 1D 6A 00 00 00 00  .....@.....j....
  0040: AA 55 AA 55 00 00 00 00 FF FF FF FF 00 00 00 00  .U.U............
  0050: 01 00 00 00 00 40 00 04 48 80 1D 6A 00 00 00 00  .....@..H..j....
  0060: 00 00 00 00 00 00 00 00 FF FF FF FF FF FF FF FF  ................
  0070: 02 02 01 00 00 40 00 04 20 80 1D 6A 00 00 00 00  .....@.. ..j....
  0080: 00 00 00 00 00 00 00 00 FF FF FF FF 00 00 00 00  ................
  0090: 03 00 00 00 00 40 00 04 50 80 1D 6A 00 00 00 00  .....@..P..j....
  00A0: 00 00 00 00 00 00 00 00 FF FF FF FF 00 00 00 00  ................
  00B0: 04 03 01 00 00 40 00 04 18 80 1D 6A 00 00 00 00  .....@.....j....
  00C0: 00 00 00 00 00 00 00 00 FF FF FF FF 00 00 00 00  ................
  00D0: 05 03 01 00 01 10 00 02 B2 00 00 00 00 00 00 00  ................
  00E0: 9A 00 00 00 00 00 00 00 FF FF 00 00 00 00 00 00  ................
  00F0: 06 01 00 00 00 40 00 04 58 80 1D 6A 00 00 00 00  .....@..X..j....
  0100: 01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00  ................
  0110: 07 00 01 00 00 40 00 04 60 80 1D 6A 00 00 00 00  .....@..`..j....
  0120: 00 00 00 00 00 00 00 00 FE 01 00 00 00 00 00 00  ................
  0130: 08 02 01 00 00 40 00 04 78 80 1D 6A 00 00 00 00  .....@..x..j....
  0140: 00 00 00 00 00 00 00 00 FF FF FF FF 00 00 00 00  ................

  <<< REDACTED: output too long. An attachment can be sent on request. >>>
   

Thanks,
Rajesh




^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Question on PMEM regions (Linux 4.9 Kernel & above)
  2020-06-20  2:02       ` Ananth, Rajesh
@ 2020-06-20  3:20         ` Dan Williams
  2020-06-24 16:56           ` Ananth, Rajesh
  0 siblings, 1 reply; 6+ messages in thread
From: Dan Williams @ 2020-06-20  3:20 UTC (permalink / raw)
  To: Ananth, Rajesh; +Cc: linux-nvdimm

On Fri, Jun 19, 2020 at 7:02 PM Ananth, Rajesh <Rajesh.Ananth@smartm.com> wrote:
>
> We used Ubuntu 18.04 to get the "acpidump" output (this is the only complete package distribution we have; otherwise we mainly use kernels we build ourselves).  The NFIT data is all valid, but somehow the "@ address" at the beginning is printed as zeros.
>
> =============================  acpidump -n NFIT  ========================================
>
> NFIT @ 0x0000000000000000              <<<<  DON'T KNOW WHY.

That's fine, acpixtract was still able to convert it... I see this
from disassembling it:

acpixtract -s NFIT nfit.txt
iasl -d nfit.dat
cat nfit.dsl


[000h 0000   4]                    Signature : "NFIT"    [NVDIMM Firmware Interface Table]
[004h 0004   4]                 Table Length : 000001A4
[008h 0008   1]                     Revision : 01
[009h 0009   1]                     Checksum : 83
[00Ah 0010   6]                       Oem ID : "ALASKA"
[010h 0016   8]                 Oem Table ID : "A M I "
[018h 0024   4]                 Oem Revision : 00000002
[01Ch 0028   4]              Asl Compiler ID : "INTL"
[020h 0032   4]        Asl Compiler Revision : 20091013

[024h 0036   4]                     Reserved : 00000000

[028h 0040   2]                Subtable Type : 0000 [System Physical Address Range]
[02Ah 0042   2]                       Length : 0038

[02Ch 0044   2]                  Range Index : 0001
[02Eh 0046   2]        Flags (decoded below) : 0002
                   Add/Online Operation Only : 0
                      Proximity Domain Valid : 1
[030h 0048   4]                     Reserved : 00000000
[034h 0052   4]             Proximity Domain : 00000000
[038h 0056  16]             Region Type GUID : 66F0D379-B4F3-4074-AC43-0D3318B78CDB
[048h 0072   8]           Address Range Base : 0000004080000000
[050h 0080   8]         Address Range Length : 0000000400000000
[058h 0088   8]         Memory Map Attribute : 0000000000008008
[..]
[0E0h 0224   2]                Subtable Type : 0000 [System Physical Address Range]
[0E2h 0226   2]                       Length : 0038

[0E4h 0228   2]                  Range Index : 0002
[0E6h 0230   2]        Flags (decoded below) : 0002
                   Add/Online Operation Only : 0
                      Proximity Domain Valid : 1
[0E8h 0232   4]                     Reserved : 00000000
[0ECh 0236   4]             Proximity Domain : 00000000
[0F0h 0240  16]             Region Type GUID : 66F0D379-B4F3-4074-AC43-0D3318B78CDB
[100h 0256   8]           Address Range Base : 0000004480000000
[108h 0264   8]         Address Range Length : 0000000400000000
[110h 0272   8]         Memory Map Attribute : 0000000000008008


...so Linux is being handed an NFIT with two regions, and the 4.16
interpretation looks correct to me. Are you sure you only changed
kernel versions and did not also do a BIOS update? If not, the 4.7
result looks like a bug for this NFIT.
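For anyone cross-checking the dump by hand, the SPA subtable fields can be decoded in a few lines (a sketch; the bytes below are copied from the first System Physical Address Range subtable in the hex dump, at table offset 0x28, with field offsets per the ACPI NFIT definition):

```python
import struct

# First SPA Range subtable, copied byte-for-byte from the NFIT dump;
# bytes.fromhex() ignores the grouping whitespace.
spa = bytes.fromhex(
    "0000 3800 0100 0200 00000000 00000000"  # type, length, index, flags, reserved, proximity
    " 79d3f066f3b47440ac430d3318b78cdb"      # region type GUID (persistent memory)
    " 0000008040000000 0000000004000000"     # range base, range length (little-endian)
    " 0880000000000000"                      # memory-map attribute
)
typ, length, index, flags = struct.unpack_from("<HHHH", spa, 0)
base, size = struct.unpack_from("<QQ", spa, 32)
print(f"SPA range {index}: base={base:#x}, size={size >> 30} GiB")
# → SPA range 1: base=0x4080000000, size=16 GiB
```

Decoding the second subtable at offset 0xE0 the same way gives base 0x4480000000, matching the two-region result above.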

^ permalink raw reply	[flat|nested] 6+ messages in thread

* RE: Question on PMEM regions (Linux 4.9 Kernel & above)
  2020-06-20  3:20         ` Dan Williams
@ 2020-06-24 16:56           ` Ananth, Rajesh
  0 siblings, 0 replies; 6+ messages in thread
From: Ananth, Rajesh @ 2020-06-24 16:56 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-nvdimm

Thank you so much for the details you provided.  We are working with our BIOS vendors to resolve the problem.

-Rajesh


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2020-06-24 16:56 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-19 23:17 Question on PMEM regions (Linux 4.9 Kernel & above) Ananth, Rajesh
2020-06-19 23:34 ` Dan Williams
     [not found]   ` <BYAPR04MB4310B8A76F318E50237447E294980@BYAPR04MB4310.namprd04.prod.outlook.com>
2020-06-20  0:11     ` Dan Williams
2020-06-20  2:02       ` Ananth, Rajesh
2020-06-20  3:20         ` Dan Williams
2020-06-24 16:56           ` Ananth, Rajesh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).