nvdimm.lists.linux.dev archive mirror
 help / color / mirror / Atom feed
From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dongsheng Yang <dongsheng.yang@easystack.cn>
Cc: John Groves <John@groves.net>,
	Dan Williams <dan.j.williams@intel.com>,
	Gregory Price <gregory.price@memverge.com>, <axboe@kernel.dk>,
	<linux-block@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<linux-cxl@vger.kernel.org>, <nvdimm@lists.linux.dev>
Subject: Re: [PATCH RFC 0/7] block: Introduce CBD (CXL Block Device)
Date: Wed, 8 May 2024 13:11:25 +0100	[thread overview]
Message-ID: <20240508131125.00003d2b@Huawei.com> (raw)
In-Reply-To: <5b7f3700-aeee-15af-59a7-8e271a89c850@easystack.cn>

On Wed, 8 May 2024 19:39:23 +0800
Dongsheng Yang <dongsheng.yang@easystack.cn> wrote:

> 在 2024/5/3 星期五 下午 5:52, Jonathan Cameron 写道:
> > On Sun, 28 Apr 2024 11:55:10 -0500
> > John Groves <John@groves.net> wrote:
> >   
> >> On 24/04/28 01:47PM, Dongsheng Yang wrote:  
> >>>
> >>>
> >>> 在 2024/4/27 星期六 上午 12:14, Gregory Price 写道:  
> >>>> On Fri, Apr 26, 2024 at 10:53:43PM +0800, Dongsheng Yang wrote:  
> >>>>>
> >>>>>
> >>>>> 在 2024/4/26 星期五 下午 9:48, Gregory Price 写道:  
> >>>>>>      
> >>>>>  
> 
> ...
> >>
> >> Just to make things slightly gnarlier, the MESI cache coherency protocol
> >> allows a CPU to speculatively convert a line from exclusive to modified,
> >> meaning it's not clear as of now whether "occasional" clean write-backs
> >> can be avoided. Meaning those read-only mappings may be more important
> >> than one might think. (Clean write-backs basically make it
> >> impossible for software to manage cache coherency.)  
> > 
> > My understanding is that clean write backs are an implementation specific
> > issue that came as a surprise to some CPU arch folk I spoke to, we will
> > need some path for a host to say if they can ever do that.
> > 
> > Given this definitely effects one CPU vendor, maybe solutions that
> > rely on this not happening are not suitable for upstream.
> > 
> > Maybe this market will be important enough for that CPU vendor to stop
> > doing it but if they do it will take a while...
> > 
> > Flushing in general is as CPU architecture problem where each of the
> > architectures needs to be clear what they do / specify that their
> > licensees do.
> > 
> > I'm with Dan on encouraging all memory vendors to do hardware coherence!  
> 
> Hi Gregory, John, Jonathan and Dan:
> 	Thanx for your information, they help a lot, and sorry for the late reply.
> 
> After some internal discussions, I think we can design it as follows:
> 
> (1) If the hardware implements cache coherence, then the software layer 
> doesn't need to consider this issue, and can perform read and write 
> operations directly.

Agreed - this is one easier case.

> 
> (2) If the hardware doesn't implement cache coherence, we can consider a 
> DMA-like approach, where we check architectural features to determine if 
> cache coherence is supported. This could be similar to 
> `dev_is_dma_coherent`.

Ok. So this would combine host support checks with checking if the shared
memory on the device is multi host cache coherent (it will be single host
cache coherent which is what makes this messy)
> 
> Additionally, if the architecture supports flushing and invalidating CPU 
> caches (`CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE`, 
> `CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU`, 
> `CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL`),

Those particular calls won't tell you much at all. They indicate that a flush
can happen as far as a common point for DMA engines in the system. No
information on whether there are caches beyond that point.

> 
> then we can handle cache coherence at the software layer.
> (For the clean writeback issue, I think it may also require 
> clarification from the architecture, and how DMA handles the clean 
> writeback problem, which I haven't further checked.)

I believe the relevant architecture only does IO coherent DMA so it is
never a problem (unlike with multihost cache coherence).
> 
> (3) If the hardware doesn't implement cache coherence and the cpu 
> doesn't support the required CPU cache operations, then we can run in 
> nocache mode.

I suspect that gets you no where either.  Never believe an architecture
that provides a flag that says not to cache something.  That just means
you should not be able to tell that it is cached - many many implementations
actually cache such accesses.

> 
> CBD can initially support (3), and then transition to (1) when hardware 
> supports cache-coherency. If there's sufficient market demand, we can 
> also consider supporting (2).
I'd assume only (3) works.  The others rely on assumptions I don't think
you can rely on.

Fun fun fun,

Jonathan

> 
> How does this approach sound?
> 
> Thanx
> > 
> > J
> >   
> >>
> >> Keep in mind that I don't think anybody has cxl 3 devices or CPUs yet, and
> >> shared memory is not explicitly legal in cxl 2, so there are things a cpu
> >> could do (or not do) in a cxl 2 environment that are not illegal because
> >> they should not be observable in a no-shared-memory environment.
> >>
> >> CBD is interesting work, though for some of the reasons above I'm somewhat
> >> skeptical of shared memory as an IPC mechanism.
> >>
> >> Regards,
> >> John
> >>
> >>
> >>  
> > 
> > .
> >   


  reply	other threads:[~2024-05-08 12:11 UTC|newest]

Thread overview: 14+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
     [not found] <20240422071606.52637-1-dongsheng.yang@easystack.cn>
     [not found] ` <66288ac38b770_a96f294c6@dwillia2-mobl3.amr.corp.intel.com.notmuch>
     [not found]   ` <ef34808b-d25d-c953-3407-aa833ad58e61@easystack.cn>
     [not found]     ` <ZikhwAAIGFG0UU23@memverge.com>
     [not found]       ` <bbf692ec-2109-baf2-aaae-7859a8315025@easystack.cn>
     [not found]         ` <ZiuwyIVaKJq8aC6g@memverge.com>
     [not found]           ` <98ae27ff-b01a-761d-c1c6-39911a000268@easystack.cn>
     [not found]             ` <ZivS86BrfPHopkru@memverge.com>
2024-04-28  5:47               ` [PATCH RFC 0/7] block: Introduce CBD (CXL Block Device) Dongsheng Yang
2024-04-28 16:44                 ` Gregory Price
2024-04-28 16:55                 ` John Groves
2024-05-03  9:52                   ` Jonathan Cameron
2024-05-08 11:39                     ` Dongsheng Yang
2024-05-08 12:11                       ` Jonathan Cameron [this message]
2024-05-08 13:03                         ` Dongsheng Yang
2024-05-08 15:44                           ` Jonathan Cameron
2024-05-09 11:24                             ` Dongsheng Yang
2024-05-09 12:21                               ` Jonathan Cameron
2024-05-09 13:03                                 ` Dongsheng Yang
2024-05-21 18:41                                   ` Dan Williams
     [not found]                                     ` <8f161b2d-eacd-ad35-8959-0f44c8d132b3@easystack.cn>
2024-05-29 15:25                                       ` Gregory Price
2024-04-30  0:34                 ` Dan Williams

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20240508131125.00003d2b@Huawei.com \
    --to=jonathan.cameron@huawei.com \
    --cc=John@groves.net \
    --cc=axboe@kernel.dk \
    --cc=dan.j.williams@intel.com \
    --cc=dongsheng.yang@easystack.cn \
    --cc=gregory.price@memverge.com \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-cxl@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=nvdimm@lists.linux.dev \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).