From: Huaisheng HS1 Ye <yehs1@lenovo.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Jeff Moyer <jmoyer@redhat.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Michal Hocko <mhocko@suse.com>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
	NingTing Cheng <chengnt@lenovo.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"pasha.tatashin@oracle.com" <pasha.tatashin@oracle.com>,
	Linux MM <linux-mm@kvack.org>, "colyli@suse.de" <colyli@suse.de>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Sasha Levin <alexander.levin@verizon.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Vlastimil Babka <vbabka@suse.cz>, Ocean HY1 He <hehy1@lenovo.com>
Subject: RE: [External]  Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone
Date: Tue, 15 May 2018 16:07:28 +0000
Message-ID: <HK2PR03MB1684B34F9D1DF18A8CDE18F292930@HK2PR03MB1684.apcprd03.prod.outlook.com>
In-Reply-To: <20180510162742.GA30442@bombadil.infradead.org>




> From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On Behalf Of Matthew
> Wilcox
> Sent: Friday, May 11, 2018 12:28 AM
> On Wed, May 09, 2018 at 04:47:54AM +0000, Huaisheng HS1 Ye wrote:
> > > On Tue, May 08, 2018 at 02:59:40AM +0000, Huaisheng HS1 Ye wrote:
> > > > Currently, in our mind, an ideal use scenario is that we put all page caches into
> > > > zone_nvm. Without any doubt, the page cache is an efficient and common cache
> > > > implementation, but it has the disadvantage that all dirty data within it is at risk
> > > > of being lost on a power failure or system crash. If we put all page caches into NVDIMMs,
> > > > all dirty data will be safe.
> > >
> > > That's a common misconception.  Some dirty data will still be in the
> > > CPU caches.  Are you planning on building servers which have enough
> > > capacitance to allow the CPU to flush all dirty data from LLC to NV-DIMM?
> > >
> > Sorry for not being clear.
> > For CPU caches, if there is a power failure, NVDIMM ADR guarantees that an interrupt
> > will be reported to the CPU, and an interrupt handler should be responsible for flushing
> > all dirty data to the NVDIMM.
> > If there is a system crash, the CPU may not get the chance to execute that handler.
> >
> > It is hard to make sure everything is safe; what we can do is save the dirty
> > data that is already stored in the page cache but not yet in the CPU cache.
> > Is this an improvement over the current situation?
> 
> No.  In the current situation, the user knows that either the entire
> page was written back from the pagecache or none of it was (at least
> with a journalling filesystem).  With your proposal, we may have pages
> splintered along cacheline boundaries, with a mix of old and new data.
> This is completely unacceptable to most customers.

Dear Matthew,

Thanks for your great help; I really hadn't considered this case.
I want to make sure I understand it correctly, so please correct me if anything below is wrong.

Is that to say this mix of old and new data in one page can only happen when the CPU fails to flush all dirty data from the LLC to the NVDIMM?
But if an interrupt can be reported to the CPU, and the CPU successfully flushes all dirty data from its cache lines to the NVDIMM within the interrupt handler, this mix of old and new data can be avoided.

Current x86_64 CPUs use N-way set-associative caches, and every cache line holds 64 bytes.
So a 4096-byte page would be split across 64 (4096/64) cache lines. Is that right?
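To make my understanding concrete, below is a rough user-space sketch (my own illustration for this mail, not code from the patch set) of writing one page back to pmem a cache line at a time. It assumes clwb support and the _mm_clwb()/_mm_sfence() intrinsics from immintrin.h; the flush_page_to_pmem() name and the constants are only for this example.

/*
 * Illustration only: write back every cache line of one page so the
 * whole 4096 bytes become durable on a pmem-backed mapping.
 * Assumes 64-byte cache lines and clwb support (compile with -mclwb).
 */
#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

#define CACHELINE_SIZE	64
#define PAGE_SIZE	4096

static void flush_page_to_pmem(void *page)
{
	uintptr_t addr = (uintptr_t)page & ~(uintptr_t)(CACHELINE_SIZE - 1);
	size_t i;

	/* 4096 / 64 = 64 write-backs, one per cache line */
	for (i = 0; i < PAGE_SIZE / CACHELINE_SIZE; i++)
		_mm_clwb((void *)(addr + i * CACHELINE_SIZE));

	/* order the write-backs before anything that depends on durability */
	_mm_sfence();
}

If power were lost part-way through that loop, only some of the 64 lines would have reached the NVDIMM, which I understand is exactly the splintered-page case you describe.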


> > > Then there's the problem of reconnecting the page cache (which is
> > > pointed to by ephemeral data structures like inodes and dentries) to
> > > the new inodes.
> > Yes, it is not easy.
> 
> Right ... and until we have that ability, there's no point in this patch.
We are focusing on realizing this ability.

Sincerely,
Huaisheng Ye


Thread overview: 23+ messages
2018-05-07 14:50 [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone Huaisheng Ye
2018-05-07 14:50 ` [RFC PATCH v1 4/6] arch/x86/kernel: mark NVDIMM regions from e820_table Huaisheng Ye
2018-05-07 18:46 ` [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone Matthew Wilcox
2018-05-07 18:57   ` Dan Williams
2018-05-07 19:08     ` Jeff Moyer
2018-05-07 19:17       ` Dan Williams
2018-05-07 19:28         ` Jeff Moyer
2018-05-07 19:29           ` Dan Williams
2018-05-08  2:59       ` [External] " Huaisheng HS1 Ye
2018-05-08  3:09         ` Matthew Wilcox
2018-05-09  4:47           ` Huaisheng HS1 Ye
2018-05-10 16:27             ` Matthew Wilcox
2018-05-15 16:07               ` Huaisheng HS1 Ye [this message]
2018-05-15 16:20                 ` Matthew Wilcox
2018-05-16  2:05                   ` Huaisheng HS1 Ye
2018-05-16  2:48                     ` Dan Williams
2018-05-16  8:33                       ` Huaisheng HS1 Ye
2018-05-16  2:52                     ` Matthew Wilcox
2018-05-16  4:10                       ` Dan Williams
2018-05-08  3:52         ` Dan Williams
2018-05-07 19:18     ` Matthew Wilcox
2018-05-07 19:30       ` Dan Williams
2018-05-08  0:54   ` [External] " Huaisheng HS1 Ye
