linux-kernel.vger.kernel.org archive mirror
From: Alan Cox <alan@lxorguk.ukuu.org.uk>
To: Luke Kenneth Casson Leighton <luke.leighton@gmail.com>
Cc: paulmck@linux.vnet.ibm.com, Will Newton <will.newton@gmail.com>,
	linux-kernel@vger.kernel.org
Subject: Re: advice sought: practicality of SMP cache coherency implemented in assembler (and a hardware detect line)
Date: Tue, 29 Mar 2011 10:16:30 +0100	[thread overview]
Message-ID: <20110329101630.1f1f0364@lxorguk.ukuu.org.uk> (raw)
In-Reply-To: <AANLkTimXztH_4f1=7-Ez6j-7UESroqwBvVuDNW7Fmewb@mail.gmail.com>

>  hmmm, the question is, therefore: would the MOSIX DSM solution be
> preferable, which i presume assumes that memory cannot be shared at
> all, to a situation where you *could* at least get cache coherency in
> userspace, if you're happy to tolerate a software interrupt handler
> flushing the cache line manually?

In theory DSM goes further than this. One way to think about DSM is cache
coherency in software with a page size granularity. So you could imagine
a hypothetical example where the physical MMU of each node, plus a memory
manager layer communicating between them, implemented a virtualised
machine on top which was cache coherent.

The detail (and devil no doubt) is in the performance.

Basically, however, provided your MMU can trap both reads and writes you
can implement a MESI cache in software. Mosix just took this to an
extreme as part of a distributed Unix (originally V7 based).

So you've got

Modified: page on one node, MMU set to fault on any other so you can
	  fix it up

Exclusive: page on one node, MMU set to fault on any other or on writes
	   by self (latter taking you to modified so you know to write
	   back)

Shared:    any write set to be caught by the MMU, the fun bit then is
	   handling invalidating across other nodes with the page in
	   cache. (and the fact multiple nodes may fault the page at once)

Invalid:   our copy is invalid (it's M or E elsewhere, probably), MMU set so
	   we fault on any access. For shared this is also relevant so
	   you can track for faster invalidates

and the rest is a software problem.


Thread overview: 13+ messages
2011-03-25 21:52 advice sought: practicality of SMP cache coherency implemented in assembler (and a hardware detect line) Luke Kenneth Casson Leighton
2011-03-25 22:41 ` Will Newton
2011-03-26 12:08   ` Alan Cox
2011-03-28 18:06     ` Paul E. McKenney
2011-03-28 18:48       ` Luke Kenneth Casson Leighton
2011-03-28 22:12         ` Paul E. McKenney
2011-03-28 22:18         ` Alan Cox
2011-03-28 23:38           ` Luke Kenneth Casson Leighton
2011-03-28 23:39             ` Luke Kenneth Casson Leighton
2011-03-28 23:53               ` Paul E. McKenney
2011-03-29  9:16             ` Alan Cox [this message]
2011-04-07 12:09               ` Luke Kenneth Casson Leighton
2011-04-08 16:24                 ` Paul E. McKenney
