From: Andrew Morton <akpm@linux-foundation.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Alex Sierra <alex.sierra@amd.com>,
	Felix.Kuehling@amd.com, linux-mm@kvack.org, rcampbell@nvidia.com,
	linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	hch@lst.de, jglisse@redhat.com, apopple@nvidia.com
Subject: Re: [PATCH v1 00/12] MEMORY_DEVICE_COHERENT for CPU-accessible coherent device memory
Date: Tue, 12 Oct 2021 12:03:22 -0700
Message-ID: <20211012120322.224d88dad0188160a40dd615@linux-foundation.org>
In-Reply-To: <20211012185629.GZ2744544@nvidia.com>

On Tue, 12 Oct 2021 15:56:29 -0300 Jason Gunthorpe <jgg@nvidia.com> wrote:

> > To what other uses will this infrastructure be put?
> > 
> > Because I must ask: if this feature is for one single computer which
> > presumably has a custom kernel, why add it to mainline Linux?
> 
> Well, it certainly isn't just "one single computer". Overall I know of
> about, hmm, ~10 *datacenters* worth of installations that are using
> similar technology underpinnings.
> 
> "Frontier" is the code name for a specific installation but as the
> technology is proven out there will be many copies made of that same
> approach.
> 
> The previous program "Summit" was done with NVIDIA GPUs and PowerPC
> CPUs and also included a very similar capability. I think this is a
> good sign that this coherently attached accelerator will continue to
> be a theme in computing going forward. IIRC this was done using out of
> tree kernel patches and NUMA localities.
> 
> Specifically with CXL now being standardized and on a path to ubiquity
> I think we will see an explosion in deployments of coherently attached
> accelerator memory. This is the high end trickling down to wider
> usage.
> 
> I strongly think many CXL accelerators are going to want to manage
> their on-accelerator memory in this way as it makes universal sense to
> want to carefully manage memory access locality to optimize for
> performance.

Thanks.  Can we please get something like the above into the [0/n]
changelog?  Along with any other high-level info which is relevant?

It's rather important.  "Why should I review this?", "Why should we
merge this?", etc.


Thread overview: 22+ messages
2021-10-12 17:12 [PATCH v1 00/12] MEMORY_DEVICE_COHERENT for CPU-accessible coherent device memory Alex Sierra
2021-10-12 17:12 ` [PATCH v1 01/12] ext4/xfs: add page refcount helper Alex Sierra
2021-10-12 17:12 ` [PATCH v1 02/12] mm: remove extra ZONE_DEVICE struct page refcount Alex Sierra
2021-10-12 17:12 ` [PATCH v1 03/12] mm: add zone device coherent type memory support Alex Sierra
2021-10-12 17:12 ` [PATCH v1 04/12] mm: add device coherent vma selection for memory migration Alex Sierra
2021-10-12 17:12 ` [PATCH v1 05/12] drm/amdkfd: ref count init for device pages Alex Sierra
2021-10-12 17:12 ` [PATCH v1 06/12] drm/amdkfd: add SPM support for SVM Alex Sierra
2021-10-12 17:12 ` [PATCH v1 07/12] drm/amdkfd: coherent type as sys mem on migration to ram Alex Sierra
2021-10-12 17:12 ` [PATCH v1 08/12] lib: test_hmm add ioctl to get zone device type Alex Sierra
2021-10-12 17:12 ` [PATCH v1 09/12] lib: test_hmm add module param for " Alex Sierra
2021-10-12 17:12 ` [PATCH v1 10/12] lib: add support for device coherent type in test_hmm Alex Sierra
2021-10-12 17:12 ` [PATCH v1 11/12] tools: update hmm-test to support device coherent type Alex Sierra
2021-10-12 17:12 ` [PATCH v1 12/12] tools: update test_hmm script to support SP config Alex Sierra
2021-10-12 18:39 ` [PATCH v1 00/12] MEMORY_DEVICE_COHERENT for CPU-accessible coherent device memory Andrew Morton
2021-10-12 18:56   ` Jason Gunthorpe
2021-10-12 19:03     ` Andrew Morton [this message]
2021-10-12 23:04       ` Felix Kuehling
2021-10-13 13:34     ` Daniel Vetter
2021-10-12 19:00   ` Felix Kuehling
2021-10-12 19:11   ` Matthew Wilcox
2021-10-12 20:24     ` Felix Kuehling
2021-10-12 20:44       ` Darrick J. Wong
