linux-mm.kvack.org archive mirror
From: Eliot Moss <moss@roc.cs.umass.edu>
To: lizhijian@fujitsu.com
Cc: Moss@cs.umass.edu, kexec@lists.infradead.org, linux-mm@kvack.org,
	nvdimm@lists.linux.dev, dan.j.williams@intel.com
Subject: Re: nvdimm,pmem: makedumpfile: __vtop4_x86_64: Can't get a valid pte.
Date: Tue, 29 Nov 2022 00:22:44 -0500
Message-ID: <70F971CF-1A96-4D87-B70C-B971C2A1747C@roc.cs.umass.edu>
In-Reply-To: <bd310eeb-7da1-ffe5-a25e-b4871ff3485d@fujitsu.com>

Glad you found it. Any thoughts/reactions?  EM

Sent from my iPhone

> On Nov 29, 2022, at 12:17 AM, lizhijian@fujitsu.com wrote:
> 
> 
> 
>> On 28/11/2022 23:03, Eliot Moss wrote:
>>> On 11/28/2022 9:46 AM, lizhijian@fujitsu.com wrote:
>>> 
>>> 
>>> On 28/11/2022 20:53, Eliot Moss wrote:
>>>> On 11/28/2022 7:04 AM, lizhijian@fujitsu.com wrote:
>>>>> Hi folks,
>>>>> 
>>>>> I'm working on making crash coredump support pmem regions, so I
>>>>> have modified kexec-tools to add the pmem region to the PT_LOAD
>>>>> entries of vmcore.
>>>>> 
>>>>> But it fails in makedumpfile; the log is as follows:
>>>>> 
>>>>> In my environment, I found that the last 512 pages of the pmem region cause the error.
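
(A minimal sketch, using only the standard elf.h types, of what describing a
pmem range as one extra PT_LOAD entry in the crash ELF headers could look
like.  The PMEM_START/PMEM_SIZE values are hypothetical, and the choice of
p_vaddr as the direct-map address is an assumption; this is not the actual
kexec-tools code:)

#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical values; the real ones would come from the pmem layout. */
#define PMEM_START  0x1000000000UL          /* physical base of pmem region */
#define PMEM_SIZE   0x80000000UL            /* region size in bytes (2 GB)  */
#define PAGE_OFFSET 0xffff888000000000UL    /* x86_64 direct-map base       */

/* Describe the pmem range as one extra PT_LOAD program header, so that
 * makedumpfile can locate those pages in the vmcore. */
static void fill_pmem_phdr(Elf64_Phdr *ph, unsigned long file_off)
{
    memset(ph, 0, sizeof(*ph));
    ph->p_type   = PT_LOAD;
    ph->p_flags  = PF_R | PF_W;
    ph->p_offset = file_off;                 /* data position in the dump  */
    ph->p_paddr  = PMEM_START;               /* physical address           */
    ph->p_vaddr  = PAGE_OFFSET + PMEM_START; /* assumed direct-map vaddr   */
    ph->p_filesz = PMEM_SIZE;
    ph->p_memsz  = PMEM_SIZE;
    ph->p_align  = 0;
}

int main(void)
{
    Elf64_Phdr ph;
    fill_pmem_phdr(&ph, 0);
    printf("PT_LOAD: paddr=%#lx vaddr=%#lx size=%#lx\n",
           (unsigned long)ph.p_paddr, (unsigned long)ph.p_vaddr,
           (unsigned long)ph.p_filesz);
    return 0;
}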
>>>> 
>>>> I wonder if an issue I reported is related: when set up to map
>>>> 2 MB (huge) pages, the last 2 MB of a large region got mapped as
>>>> 4 KB pages, and then later, half of a large region was treated
>>>> that way.
>>>> 
>>> Could you share the URL/link?  I'd like to take a look.
>> 
>> It was in a previous email to the nvdimm list.  The title was:
>> 
>> "Possible PMD (huge pages) bug in fs dax"
>> 
>> And here is the body.  I just sent it directly to the list, so there
>> is no URL (if I should be engaging in a different way, please let me know):
> 
> I found it :) at
> https://www.mail-archive.com/nvdimm@lists.linux.dev/msg02743.html
> 
> 
>> ================================================================================
>> Folks - I posted already on nvdimm, but perhaps the topic did not quite grab
>> anyone's attention.  I had had some trouble figuring out all the details to
>> get dax mapping of files from an xfs file system with underlying Optane DC
>> memory going, but now have that working reliably.  But there is an odd
>> behavior:
>> 
>> When first mapping a file, I request mapping a 32 GB range, aligned on a 1 GB
>> (and thus clearly on a 2 MB) boundary.
>> 
>> For each group of 8 GB, the first 4095 entries map with a 2 MB huge (PMD)
>> page.  The 4096th one does FALLBACK.  I suspect some problem in
>> dax.c:grab_mapping_entry or its callees, but am not personally well enough
>> versed in either the dax code or the xarray implementation to dig further.
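
(Worth spelling out the arithmetic that connects the two reports, since
neither mail does: 4096 PMD entries of 2 MB each cover exactly 8 GB, so the
entry that falls back is the last one in each 8 GB group, and one 2 MB PMD
is exactly the 512 4-KB pages that fail in the makedumpfile report above.
A trivial sanity check:)

#include <assert.h>

int main(void)
{
    const unsigned long page_4k = 4096UL;
    const unsigned long pmd_2m  = 512 * page_4k;   /* one PMD huge page  */

    assert(pmd_2m == (2UL << 20));                 /* 512 x 4 KB == 2 MB */
    assert(4096 * pmd_2m == (8UL << 30));          /* 4096 PMDs == 8 GB  */
    /* So a fallback on the 4096th (last) PMD entry of each 8 GB group
     * covers the same 2 MB -- the same "last 512 pages" -- that the
     * makedumpfile report above trips over. */
    return 0;
}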
>> 
>> 
>> If you'd like a second puzzle 😄 ... after completing this mapping, another
>> thread accesses the whole range sequentially.  This results in NOPAGE fault
>> handling for the first 4095+4095 2 MB regions that previously resulted in
>> NOPAGE -- so far so good.  But it gives FALLBACK for the upper 16 GB (except
>> the two PMD regions it already gave FALLBACK for).
>> 
>> 
>> I can provide trace output from a run, and all the ndctl, gdisk -l,
>> fdisk -l, and xfs_info details, if you like.
>> 
>> 
>> In my application, it would be nice if dax.c could deliver 1 GB PUD-size
>> mappings as well, though it would appear that that would require more surgery
>> on dax.c.  It would be somewhat analogous to what's already there, of course,
>> but I don't mean to minimize the possible trickiness of it.  I realize I
>> should submit that request as a separate thread 😄, which I intend to do
>> later.
>> ================================================================================
>> 
>> Regards - Eliot Moss
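
(For anyone who wants to poke at the first puzzle, a minimal user-space
sketch of the access pattern described above.  The file path and mmap hint
address are placeholders, and the NOPAGE/FALLBACK results themselves come
from the kernel's fs_dax tracepoints, e.g. dax_pmd_fault_done, not from
this program:)

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MB (1UL << 20)
#define GB (1UL << 30)

int main(void)
{
    /* Placeholder path: a 32 GB file on a DAX-mounted xfs filesystem. */
    int fd = open("/mnt/pmem/testfile", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 32 * GB;
    /* 1 GB-aligned hint address; the kernel may ignore a plain hint,
     * so verify the alignment actually obtained. */
    void *hint = (void *)(1UL << 44);
    char *p = mmap(hint, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    if ((uintptr_t)p % (2 * MB))
        fprintf(stderr, "warning: mapping not 2 MB aligned; PMDs unlikely\n");

    /* Touch one byte per 2 MB: with fs_dax tracing enabled, each touch
     * shows up as a PMD fault ending in NOPAGE (huge page installed)
     * or FALLBACK (retried as 4 KB PTE faults). */
    for (size_t off = 0; off < len; off += 2 * MB)
        p[off] = 1;

    munmap(p, len);
    close(fd);
    return 0;
}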




Thread overview: 7+ messages
2022-11-28 12:04 nvdimm,pmem: makedumpfile: __vtop4_x86_64: Can't get a valid pte lizhijian
     [not found] ` <103666d5-3dcf-074c-0057-76b865f012a6@cs.umass.edu>
2022-11-28 14:46   ` lizhijian
2022-11-28 15:03     ` Eliot Moss
2022-11-29  5:16       ` lizhijian
2022-11-29  5:22         ` Eliot Moss [this message]
2022-11-30 20:05 ` Dan Williams
2022-12-01  9:42   ` lizhijian
