From: Dan Williams <dan.j.williams@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Dongsheng Yang <dongsheng.yang@easystack.cn>
Cc: Gregory Price <gregory.price@memverge.com>,
	Dan Williams <dan.j.williams@intel.com>,
	John Groves <John@groves.net>, <axboe@kernel.dk>,
	<linux-block@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<linux-cxl@vger.kernel.org>, <nvdimm@lists.linux.dev>,
	<james.morse@arm.com>, Mark Rutland <mark.rutland@arm.com>
Subject: Re: [PATCH RFC 0/7] block: Introduce CBD (CXL Block Device)
Date: Fri, 31 May 2024 20:22:42 -0700
Message-ID: <665a9402445ee_166872941d@dwillia2-mobl3.amr.corp.intel.com.notmuch>
In-Reply-To: <20240530143813.00006def@Huawei.com>

Jonathan Cameron wrote:
> On Thu, 30 May 2024 14:59:38 +0800
> Dongsheng Yang <dongsheng.yang@easystack.cn> wrote:
> 
> > On Wed, May 29, 2024 at 11:25 PM, Gregory Price wrote:
> > > On Wed, May 22, 2024 at 02:17:38PM +0800, Dongsheng Yang wrote:  
> > >>
> > >>
> > >> On Wed, May 22, 2024 at 2:41 AM, Dan Williams wrote:
> > >>> Dongsheng Yang wrote:
> > >>>
> > >>> What guarantees this property? How does the reader know that its local
> > >>> cache invalidation is sufficient for reading data that has only reached
> > >>> global visibility on the remote peer? As far as I can see, there is
> > >>> nothing that guarantees that local global visibility translates to
> > >>> remote visibility. In fact, the GPF feature is counter-evidence of the
> > >>> fact that writes can be pending in buffers that are only flushed on a
> > >>> GPF event.  
> > >>
> > >> Sounds correct. From what I have learned about GPF, ADR, and eADR, there
> > >> would still be data in the WPQ even after we perform a CPU cache line
> > >> flush in the OS.
> > >>
> > >> This means we don't have an explicit method to make data puncture all
> > >> caches and land on the media after writing. It also seems there isn't an
> > >> explicit method to invalidate all caches along the entire path.
> > >>  
> > >>>
> > >>> I remain skeptical that a software-managed inter-host cache-coherency
> > >>> scheme can be made reliable with current CXL-defined mechanisms.  
> > >>
> > >>
> > >> I get your point now: according to the current CXL spec, it seems
> > >> software-managed cache-coherency for inter-host shared memory is not
> > >> workable. Will the next version of the CXL spec consider it?
> > >>>  
> > > 
> > > Sorry for missing the conversation, have been out of office for a bit.
> > > 
> > > It's not just a CXL spec issue, though that is part of it. I think the
> > > CXL spec would have to expose some form of puncturing flush, and this
> > > makes the assumption that such a flush doesn't cause some kind of
> > > race/deadlock issue.  Certainly this needs to be discussed.
> > > 
> > > However, consider that the upstream processor actually has to generate
> > > this flush.  This means adding the flush to existing coherence protocols,
> > > or at the very least a new instruction to generate the flush explicitly.
> > > The latter seems more likely than the former.
> > > 
> > > This flush would need to ensure the data is forced out of the local WPQ
> > > AND all WPQs south of the PCIe complex - because what you really want to
> > > know is that the data has actually made it back to a place where remote
> > > viewers are capable of perceiving the change.
> > > 
> > > So this means:
> > > 1) Spec revision with puncturing flush
> > > 2) Buy-in from CPU vendors to generate such a flush
> > > 3) A new instruction added to the architecture.
> > > 
> > > Call me in a decade or so.
> > > 
> > > 
> > > But really, I think it likely we'll see hardware coherence well before
> > > this. For this reason, I have become skeptical of all but a few memory
> > > sharing use cases that depend on software-controlled cache-coherency.  
> > 
> > Hi Gregory,
> > 
> > 	From my understanding, we actually have the same idea here. What I am
> > saying is that we need the spec to consider this issue, meaning we need to
> > describe how the entire software-coherency mechanism operates, which
> > includes the necessary hardware support. Additionally, I agree that if
> > software-coherency also requires hardware support, it seems that
> > hardware-coherency is the better path.
> > > 
> > > There are some (FAMFS, for example). The coherence state of these
> > > systems tends to be less volatile (e.g. mappings are read-only), or
> > > they have inherent design limitations (cacheline-sized message passing
> > > via write-ahead logging only).  
> > 
> > Can you explain more about this? I understand that if the reader in the
> > writer-reader model is using a read-only mapping, the interaction will be
> > much simpler. However, after the writer writes data, if we don't have a
> > flush-and-invalidate mechanism that punctures all caches, how can the
> > read-only reader access the new data?
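
For concreteness, the pattern Gregory describes looks roughly like the
sketch below. This is hypothetical code, not FAMFS; the names are made
up, and it leans on exactly the contested assumption: that clwb on the
writer actually reaches media the reader can see, and that clflush on
the reader really re-fetches from that media.

/* Build with -mclwb on x86. */
#include <stdint.h>
#include <stddef.h>
#include <immintrin.h>

#define CACHELINE 64

struct __attribute__((aligned(CACHELINE))) log_entry {
	uint64_t seq;		/* written last; nonzero means valid */
	uint8_t payload[CACHELINE - sizeof(uint64_t)];
};

/* Writer: fill the payload, then publish by writing seq last. */
static void log_append(volatile struct log_entry *e, uint64_t seq,
		       const uint8_t *data, size_t len)
{
	size_t i;

	for (i = 0; i < len && i < sizeof(e->payload); i++)
		e->payload[i] = data[i];
	_mm_clwb((void *)e);	/* push the payload toward media */
	_mm_sfence();
	e->seq = seq;		/* publish */
	_mm_clwb((void *)e);
	_mm_sfence();		/* ordered locally; remote visibility is the open question */
}

/* Read-only consumer: poll for the next sequence number. */
static int log_poll(volatile struct log_entry *e, uint64_t expect)
{
	_mm_clflush((void *)e);	/* drop any stale local copy */
	_mm_mfence();
	return e->seq == expect; /* payload is valid only if this holds */
}
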
> 
> There is a mechanism for doing coarse-grained flushing that is known to
> work on some architectures. Look at cpu_cache_invalidate_memregion().
> On Intel/x86 it is wbinvd_on_all_cpus().

There is no guarantee on x86 that, after cpu_cache_invalidate_memregion(),
a remote shared-memory consumer can be assured of seeing the writes that
preceded that event.
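
To make the gap concrete, here is a hedged sketch of the reader-side
"resync" a driver might attempt. cpu_cache_invalidate_memregion() and
IORES_DESC_CXL are real kernel interfaces (see linux/memregion.h); the
surrounding function is hypothetical, and the comment at step 3 marks
the missing guarantee.

#include <linux/memregion.h>
#include <linux/ioport.h>
#include <linux/errno.h>

static int reader_resync(void)
{
	int rc;

	if (!cpu_cache_has_invalidate_memregion())
		return -ENXIO;

	/*
	 * 1) Assume the remote writer has already flushed its CPU
	 *    caches.
	 * 2) Invalidate all local CPU caches so that subsequent loads
	 *    miss and fetch from the device (wbinvd_on_all_cpus()
	 *    under the hood on x86).
	 */
	rc = cpu_cache_invalidate_memregion(IORES_DESC_CXL);

	/*
	 * 3) The gap: nothing above guarantees the remote writes ever
	 *    left the remote host's WPQs or the device's own buffers,
	 *    so these loads can still return stale data.
	 */
	return rc;
}
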

> on arm64 it's a PSCI firmware call, CLEAN_INV_MEMREGION (there is a
> public alpha specification for PSCI 1.3 with that defined, but we
> don't yet have kernel code).

That punches visibility through CXL shared memory devices?

> These are very big hammers, and so unsuited to anything fine-grained. At
> the extreme end of possible implementations they briefly stop all CPUs
> and clean and invalidate all caches of all types. That may be acceptable
> for a rare setup event, particularly if the main job of the writing host
> is to fill that memory for lots of other hosts to use.
> 
> At least the ARM one takes a range, so it allows for a less painful
> implementation. I'm assuming we'll see new architecture over time, but
> this is a different (and potentially easier) problem space to what you
> need.

cpu_cache_invalidate_memregion() is only about making sure the local CPU
sees new contents after a DPA:HPA remap event. I hope CPUs are able to
get away from that responsibility long term, when/if future memory
expanders just issue back-invalidate automatically when the HDM decoder
configuration changes.
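
In other words, its only job today is something like the following
rough sketch (not the actual cxl core code; reprogram_hdm_decoders() is
a hypothetical placeholder for the decoder commit path):

#include <linux/memregion.h>
#include <linux/ioport.h>
#include <linux/errno.h>

struct cxl_region;
int reprogram_hdm_decoders(struct cxl_region *cxlr);	/* hypothetical */

static int region_remap(struct cxl_region *cxlr)
{
	int rc;

	if (!cpu_cache_has_invalidate_memregion())
		return -ENXIO;

	/* After this, the HPA range decodes to a different DPA. */
	rc = reprogram_hdm_decoders(cxlr);
	if (rc)
		return rc;

	/*
	 * Purge cachelines that still hold the old DPA contents for
	 * this HPA range. If future devices emit back-invalidate when
	 * decoders change, this step (and the wbinvd big hammer behind
	 * it) goes away.
	 */
	return cpu_cache_invalidate_memregion(IORES_DESC_CXL);
}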
