From: Jerome Glisse <jglisse@redhat.com>
To: "Figo.zhang" <figo1802@gmail.com>
Cc: lsf-pc@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
	Anshuman Khandual <khandual@linux.vnet.ibm.com>,
	Balbir Singh <bsingharora@gmail.com>,
	Dan Williams <dan.j.williams@intel.com>,
	John Hubbard <jhubbard@nvidia.com>,
	Jonathan Masters <jcm@redhat.com>,
	Ross Zwisler <ross.zwisler@linux.intel.com>
Subject: Re: [LSF/MM TOPIC] CAPI/CCIX cache coherent device memory (NUMA too ?)
Date: Tue, 16 Jan 2018 21:30:24 -0500
Message-ID: <20180117023024.GB3492@redhat.com>
In-Reply-To: <CAF7GXvpsAPhHWFV3g9LdzKg6Fe=Csp+kecG+HznoaT0Hiu9HCw@mail.gmail.com>

On Wed, Jan 17, 2018 at 09:55:14AM +0800, Figo.zhang wrote:
> 2018-01-17 5:03 GMT+08:00 Jerome Glisse <jglisse@redhat.com>:
> 
> > CAPI (on IBM Power8 and 9) and CCIX are two new standards that
> > build on top of an existing interconnect (like PCIE) and add the
> > possibility of cache coherent access both ways (from CPU to
> > device memory and from device to main memory). This extends
> > what we are used to with PCIE (where only device access to main
> > memory can be cache coherent, not CPU access to device memory).
> >
> 
> the UPI bus also supports cache coherency on Intel platforms, right?

AFAIK UPI only applies between processors and is not exposed to devices,
except integrated Intel devices (like the Intel GPU or FPGA), so it is
less generic/open than CAPI/CCIX.

> it seems the CCIX/CAPI protocol specifications are not public, so we
> cannot know the details about them; will your topic cover the details?

I can only cover what will be public at the time of the summit, but for
the sake of discussion the important characteristic is the cache
coherency aspect. Discussing how it is implemented, the cache line
protocol, and all the gory protocol details is of little interest
from the kernel's point of view.


> > How this memory is going to be exposed to the kernel, and how the
> > kernel is going to expose it to user space, is the topic I want to
> > discuss. I believe this is highly device specific; for instance,
> > for a GPU you want the device memory allocation and usage to be
> > under the control of the GPU device driver. Maybe other types of
> > device want a different strategy.
> >
> I see it lacks some simple examples of how to use HMM, because a GPU
> driver is too complicated for most Linux driver developers, except
> the ATI/NVIDIA developers.

HMM requires a device with an MMU that is capable of pausing a workload
that page faults. The only devices complex enough that I know of are GPUs,
Infiniband and FPGAs. The feedback I have had so far is that most people
working on any such device driver understand HMM. I am always happy to
answer any specific questions on the API and how it is intended to be used
by device drivers (and improve the kernel documentation in the process).
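
To make that concrete, below is a rough sketch of the mirror side of HMM
as a device driver is expected to use it. It is not taken from any
in-tree driver: the hmm_mirror_register() / sync_cpu_device_pagetables()
names follow include/linux/hmm.h around v4.15, but check the header for
the exact signatures, and everything named mydev_* is made up for
illustration.

#include <linux/hmm.h>

struct mydev;					/* hypothetical device struct */
void mydev_invalidate_range(struct mydev *dev, unsigned long start,
			    unsigned long end);	/* hypothetical helper */

struct mydev_mirror {
	struct hmm_mirror mirror;		/* embedded HMM mirror */
	struct mydev *dev;
};

/*
 * HMM calls this whenever the CPU page tables change (munmap, migration,
 * write protection, ...); the driver must invalidate the matching range
 * in its own device page table before returning.
 */
static void mydev_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
					     enum hmm_update_type update,
					     unsigned long start,
					     unsigned long end)
{
	struct mydev_mirror *m;

	m = container_of(mirror, struct mydev_mirror, mirror);
	mydev_invalidate_range(m->dev, start, end);
}

static const struct hmm_mirror_ops mydev_mirror_ops = {
	.sync_cpu_device_pagetables	= mydev_sync_cpu_device_pagetables,
};

/* Called when a process opens the device: mirror its address space. */
static int mydev_mirror_mm(struct mydev_mirror *m, struct mm_struct *mm)
{
	m->mirror.ops = &mydev_mirror_ops;
	return hmm_mirror_register(&m->mirror, mm);
}

On a device page fault the driver would then use the hmm_vma_* helpers to
fault/snapshot the range, program its own page table from the returned
pfns, and resume the paused workload.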

How HMM functionality is then exposed to userspace is under the control
of each individual device driver.
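
As one purely hypothetical illustration of what that exposure can look
like: because HMM lets the device work directly on the process address
space, a driver's existing submission ioctl can simply accept plain
pointers from the application. None of the following is from a real
driver; the mydev_* names and the ioctl layout are invented.

/* Hypothetical uapi: the application passes regular malloc()ed pointers. */
#include <linux/ioctl.h>
#include <linux/types.h>

struct mydev_submit {
	__u64 in_ptr;	/* any valid address in the process address space */
	__u64 out_ptr;
	__u64 size;
};
#define MYDEV_IOC_SUBMIT	_IOW('M', 0x01, struct mydev_submit)

Userspace then needs no special allocator, just the usual malloc():

	void *in = malloc(size), *out = malloc(size);
	struct mydev_submit args = {
		.in_ptr  = (__u64)(uintptr_t)in,
		.out_ptr = (__u64)(uintptr_t)out,
		.size    = size,
	};
	ioctl(fd, MYDEV_IOC_SUBMIT, &args);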

Cheers,
Jerome

