From: Vikram Sethi <vsethi@nvidia.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: "linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
	"Natu, Mahesh" <mahesh.natu@intel.com>,
	"Rudoff, Andy" <andy.rudoff@intel.com>,
	Jeff Smith <JSMITH@nvidia.com>,
	Mark Hairgrove <mhairgrove@nvidia.com>,
	"jglisse@redhat.com" <jglisse@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Linux MM <linux-mm@kvack.org>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	"will@kernel.org" <will@kernel.org>,
	"anshuman.khandual@arm.com" <anshuman.khandual@arm.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Ard Biesheuvel <Ard.Biesheuvel@arm.com>
Subject: RE: Onlining CXL Type2 device coherent memory
Date: Fri, 30 Oct 2020 22:39:49 +0000	[thread overview]
Message-ID: <BL0PR12MB2532DDE990282976888108D2BD150@BL0PR12MB2532.namprd12.prod.outlook.com> (raw)
In-Reply-To: <CAPcyv4jWFf0=VoA2EiXPaQphA-5z9JFO8h0Agy0dO0w6nDyorw@mail.gmail.com>

Hi Dan, 
> From: Dan Williams <dan.j.williams@intel.com>
> On Wed, Oct 28, 2020 at 4:06 PM Vikram Sethi <vsethi@nvidia.com> wrote:
> >
> > Hello,
> >
> > I wanted to kick off a discussion on how Linux onlining of CXL [1] type 2 device
> > Coherent memory aka Host managed device memory (HDM) will work for type 2 CXL
> > devices which are available/plugged in at boot. A type 2 CXL device can be simply
> > thought of as an accelerator with coherent device memory, that also has a
> > CXL.cache to cache system memory.
> >
> > One could envision that BIOS/UEFI could expose the HDM in the EFI memory map
> > as conventional memory as well as in ACPI SRAT/SLIT/HMAT. However, at least
> > on some architectures (arm64) EFI conventional memory available at kernel boot
> > cannot be offlined, so this may not be suitable on all architectures.
> 
> That seems an odd restriction. Adding David, linux-mm, and linux-acpi, as
> they might be interested in / have comments on this restriction as well.
> 
> > Further, the device driver associated with the type 2 device/accelerator may
> > want to save off a chunk of HDM for driver-private use.
> > So it seems the more appropriate model may be something like the dev dax model,
> > where the device driver probe/open calls add_memory_driver_managed, and
> > the driver could choose how much of the HDM it wants to reserve and how
> > much to make generally available for application mmap/malloc.
> 
> Sure, it can always be driver managed. The trick will be getting the
> platform firmware to agree to not map it by default, but I suspect
> you'll have a hard time convincing platform-firmware to take that
> stance. The BIOS does not know, and should not care what OS is booting
> when it produces the memory map. So I think CXL memory unplug after
> the fact is more realistic than trying to get the BIOS not to map it.
> So, to me it looks like arm64 needs to reconsider its unplug stance.

Agree. Cc Anshuman, Will, Catalin, and Ard, in case I missed something in
Anshuman's patches adding arm64 memory remove, or in case there are plans
to remove the limitation.
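
For the driver-managed model above, I'm imagining something roughly like the
sketch below at driver probe time. This is purely illustrative: hdm_base,
hdm_size, and the size of the reserved slice are made-up placeholders, and the
add_memory_driver_managed() signature shown is the ~v5.8 one (newer kernels
take an extra mhp_t flags argument), so treat it as a sketch rather than a
proposal.

#include <linux/kernel.h>
#include <linux/memory.h>
#include <linux/memory_hotplug.h>
#include <linux/sizes.h>

/* Slice of HDM the driver keeps for its private use (placeholder size) */
#define HDM_PRIVATE_SIZE	SZ_256M

static int hdm_add_public_memory(int nid, u64 hdm_base, u64 hdm_size)
{
	u64 end = hdm_base + hdm_size;
	u64 start = ALIGN(hdm_base + HDM_PRIVATE_SIZE,
			  memory_block_size_bytes());
	u64 size = ALIGN_DOWN(end - start, memory_block_size_bytes());

	/*
	 * A "System RAM ($DRIVER)" style resource name keeps the range
	 * visibly driver-managed in /proc/iomem, distinct from
	 * firmware-described conventional memory.
	 */
	return add_memory_driver_managed(nid, start, size,
					 "System RAM (hdm)");
}

Onlining of the added blocks could then follow the usual memory block/udev
policy, while the private slice never enters the page allocator.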
 
> > Another thing to think about is whether the kernel relies on UEFI having fully
> > described NUMA proximity domains and end-to-end NUMA distances for HDM,
> > or whether the kernel will provide some infrastructure to make use of the
> > device-local affinity information provided by the device in the Coherent Device
> > Attribute Table (CDAT) via a mailbox, and use that to add a new NUMA node ID
> > for the HDM, with the NUMA distances calculated by adding the device-local
> > distance to the NUMA distance of the host bridge/root port. At least
> > that's how I think CDAT is supposed to work when the kernel doesn't want to rely
> > on BIOS tables.
> 
> The kernel can supplement the NUMA configuration from CDAT, but not if
> the memory is already enumerated in the EFI Memory Map and ACPI
> SRAT/HMAT. At that point CDAT is a nop because the BIOS has precluded
> the OS from consuming it.

That makes sense.
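
For what it's worth, I'd expect a type 2 driver could detect that case up
front before trying the driver-managed path, along these lines (hdm_base and
hdm_size are placeholders again; region_intersects() is existing kernel API):

#include <linux/ioport.h>

/*
 * True if firmware already handed (some of) the HDM range to the OS as
 * conventional memory, i.e. the EFI memmap/SRAT path was taken and the
 * CDAT-driven NUMA description is moot for this range.
 */
static bool hdm_already_system_ram(u64 hdm_base, u64 hdm_size)
{
	return region_intersects(hdm_base, hdm_size,
				 IORESOURCE_SYSTEM_RAM,
				 IORES_DESC_NONE) != REGION_DISJOINT;
}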

> > A similar question on NUMA node ID and distances for HDM arises for CXL hotplug.
> > Will the kernel rely on CDAT, and create its own NUMA node ID and patch up
> > distances, or will it rely on BIOS providing a PXM domain reserved at boot in
> > SRAT to be used later on hotplug?
> 
> I don't expect the kernel to merge any CDAT data into the ACPI tables.
> Instead the kernel will optionally use CDAT as an alternative method
> to generate Linux NUMA topology independent of ACPI SRAT. Think of it
> like Linux supporting both ACPI and Open Firmware NUMA descriptions at
> the same time. CDAT is its own NUMA description domain unless BIOS has
> blurred the lines and pre-incorporated it into SRAT/HMAT. That said I
> think the CXL attached memory not described by EFI / ACPI is currently
> the NULL set.

What I meant by patch/merge: take a dual-socket system with distance 40
between the sockets (not getting into HMAT vs SLIT descriptions of latency).
If you hotplug a CXL type 2/3 device whose CDAT says the device-local
'distance' is 80, the kernel still merges that 80 with the 40 to the remote
socket, giving 120 from a remote-socket CPU to this socket's CXL device.
Whether the 40 came from SLIT or HMAT, it is still combined with the data
the kernel had obtained from ACPI. I think you're saying the same thing in a
different way: the device-local part is not merged with anything ACPI provided
for the device itself, e.g. _SLI at hotplug time (which I agree with).
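
To put numbers on that, here is a toy illustration of the composition I mean
(ordinary userspace C, not kernel code; node 0 and node 1 are the two sockets,
node 2 is the hotplugged HDM behind node 1's root port):

#include <stdio.h>

#define SLIT_SOCKET_TO_SOCKET	40	/* from ACPI SLIT/HMAT */
#define CDAT_DEVICE_LOCAL	80	/* device-local 'distance' from CDAT */

int main(void)
{
	/* remote-socket CPU -> HDM node: inter-socket + device-local */
	printf("node0 -> node2: %d\n",
	       SLIT_SOCKET_TO_SOCKET + CDAT_DEVICE_LOCAL);	/* 120 */

	/* local-socket CPU -> HDM node: just the device-local part (plus
	 * whatever the host bridge itself contributes; I'm leaving that
	 * detail open here) */
	printf("node1 -> node2: %d\n", CDAT_DEVICE_LOCAL);	/* 80 */
	return 0;
}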

Vikram
